By working, you mean only on the Windows side, right?

On the OSX side, OpenCL is still broken and the link speed is 2.5 instead of 5.0, right? Or has that changed?

Yes, sadly OpenCL does not work. At least the Luxmark benchmark does not.
OpenGL games are definitely faster than on my GTX 680, and since I use Windows a lot lately, I was willing to compromise. I am crossing my fingers and hoping that Macvidcards and other folks can eventually bring full functionality. But as of now I am quite happy with my purchase.
 
By working, you mean only on the Windows side, right?

On the OSX side, OpenCL is still broken and the link speed is 2.5 instead of 5.0, right? Or has that changed?

We have made some progress: I can enable 5.0 in OSX and Windows now, but Nvidia has not made it easy. It is likely that a future OSX update will bring 5.0 in OSX. Windows will always require our mods.

Still working on the boot screens.
 
We have made some progress: I can enable 5.0 in OSX and Windows now, but Nvidia has not made it easy. It is likely that a future OSX update will bring 5.0 in OSX. Windows will always require our mods.

Still working on the boot screens.


By mods, you mean to the software on the card, not to the physical card itself, right?
 
We have made some progress: I can enable 5.0 in OSX and Windows now, but Nvidia has not made it easy. It is likely that a future OSX update will bring 5.0 in OSX. Windows will always require our mods.

Still working on the boot screens.

Beware - What Nvidia hath given, it appears that Nvidia has now begun to take away, piece by piece. The latest driver, utility, etc. download appears to be designed to limit Titan's advantages when you have two or more of them in the same system. See post #580 here: https://forums.macrumors.com/showthread.php?p=17147835#post17147835 .
 
Most likely "yes," if you're running them strictly by the book. But the way I run mine, I'd have to have more headroom.

How many Titans would you feel comfortable fitting into a 1000W PCIe expansion system and still have the "headroom" you quote?

----------

Most likely "yes," if you're running them strictly by the book. But the way I run mine, I'd have to have more headroom.

Also, would the NA255A PCIe 3.0 x16 slot card be backwards compatible with the PCIe 2.0 slots in a MacPro4,1 flashed to 5,1 (i.e. the PCIe 3.0 x16 card would fit in a PCIe 2.0 slot and function at reduced 2.0 bandwidth)?
 
Also, would the NA255A PCIe 3.0 x16 slot card be backwards compatible with the PCIe 2.0 slots in a MacPro4,1 flashed to 5,1 (i.e. the PCIe 3.0 x16 card would fit in a PCIe 2.0 slot and function at reduced 2.0 bandwidth)?

I can't speak from experience, but I did do some reading on this product this morning. Tom's Hardware did a review of it last month, and it actually defaults to PCI 2.0 via jumpers. So I'd assume that, yes, it will work just fine in a PCI 2.0 box like a Mac Pro.

Bear in mind this isn't an inexpensive add-on. It's around $2200 or so, give or take.

jas
 
How many Titans would you feel comfortable fitting into a 1000W PCIe expansion system and still have the "headroom" you quote?

By my prior statement, I wasn't excluding the owner swapping out the 1000W PSU for a 1500W PSU, which is what I would do; I'd then use the 1000W PSU in one of my other systems where I may have originally cheaped out on the PSU. If I couldn't swap out the PSU, then I would put only three Titans in there, because I would run them at full bore. But remember, I'm running my Titans on my self-built EVGA SR-2 (which has PCI-e 2.0 slots), where I can do watt-churning speed enhancements. Not so on the Mac Pro yet; for now you can run a Titan, or four, only at factory settings. So you could put 4 in a 1000W Netstor connected to a Mac Pro without a problem.
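For anyone who wants to sanity-check the headroom arithmetic, here's a rough back-of-the-envelope sketch in Python. The 250 W figure is Nvidia's published board power; the 300 W "full bore" figure is only my assumption for a heavily tweaked card, not a measurement.

SLOTS = 4           # the Netstor chassis has four double-wide x16 slots
STOCK_W = 250       # published Titan board power
TWEAKED_W = 300     # assumed draw for a card run "full bore" (my guess, not a measurement)

for psu_w in (1000, 1500):
    stock = min(SLOTS, psu_w // STOCK_W)
    tweaked = min(SLOTS, psu_w // TWEAKED_W)
    print(psu_w, "W PSU:", stock, "stock Titans,", tweaked, "tweaked Titans")

With the stock 1000W supply that works out to four factory-clocked Titans but only three tweaked ones, which is exactly why I'd swap in the 1500W unit.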


Also, would the NA255A PCIe 3.0 x16 slot card be backwards compatible with the PCIe 2.0 slots in a MacPro4,1 flashed to 5,1 (i.e. the PCIe 3.0 x16 card would fit in a PCIe 2.0 slot and function at reduced 2.0 bandwidth)?

This question assumes that there is a reason to buy the NA255A PCIe 3.0 x16 slot card in the first place if you have a Mac Pro, when you can't take full advantage of it. Yes, it is backwards compatible, allowing you to run it on a Mac Pro (all Mac Pros are limited to PCI-e 2.0, except for the early ones, which are PCI-e 1.0). But if you have a Mac Pro, why not just buy the NA250A here: http://www.mypccase.com/pcexboxwistb2.html - or here: http://www.bhphotovideo.com/c/produ...SLOT_PCIe_EXPANSION_ENC_PCIe_6_2_DESKTOP.html - use the $300 difference ($2199 - $1899) to buy the PC/Mac Host interface card, which is sold separately for $189 for both of the external chassis (somewhat deceptively, the ads mention that the Netstor comes with the Host Adapter card - but that's the card that fits in the Netstor, not the Host card that fits in your Mac/PC), and give someone special a good $110 dinner? I can't do dinner out, however, until June. But if you, like me, also have a Sandy Bridge system with PCI-e 3.0 slots, then buying the NA255A makes more sense; I'd still swap out the PSU, though, to get at least 500 more watts of power.
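To lay the dollar figures out plainly (these are just the prices already quoted above, nothing new):

na255a = 2199      # NA255A (PCIe 3.0) chassis
na250a = 1899      # NA250A (PCIe 2.0) chassis
host_card = 189    # PC/Mac host interface card, sold separately for either box

difference = na255a - na250a       # 300
dinner = difference - host_card    # 111 -- roughly the "good $110 dinner"
print(difference, dinner)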

----------

I can't speak from experience, but I did do some reading on this product this morning. Tom's Hardware did a review of it last month, and it actually defaults to PCI 2.0 via jumpers. So I'd assume that, yes, it will work just fine in a PCI 2.0 box like a Mac Pro.

Bear in mind this isn't an inexpensive add-on. It's around $2200 or so, give or take.

jas
Absolutely correct, again on all fronts.
 
By my prior statement, I wasn't excluding the owner swapping out the 1000W PSU for a 1500W PSU, which is what I would do; I'd then use the 1000W PSU in one of my other systems where I may have originally cheaped out on the PSU. If I couldn't swap out the PSU, then I would put only three Titans in there, because I would run them at full bore. But remember, I'm running my Titans on my self-built EVGA SR-2 (which has PCI-e 2.0 slots), where I can do watt-churning speed enhancements. Not so on the Mac Pro yet; for now you can run a Titan, or four, only at factory settings. So you could put 4 in a 1000W Netstor connected to a Mac Pro without a problem.

The Titan specs show 250 W for the Graphics Card Power. If the Titan is only going to be used at default specification, then you're saying there isn't really a need to worry about "margin of [wattage] error" for the total wattage draw across the 4 Titans (i.e. 250 W x 4 = 1000 W, the exact total wattage for the NA255A)?


This question assumes that there is a reason to buy the NA255A PCIe 3.0 x16 slot card in the first place if you have a Mac Pro, when you can't take full advantage of it. Yes, it is backwards compatible, allowing you to run it on a Mac Pro (all Mac Pros are limited to PCI-e 2.0, except for the early ones, which are PCI-e 1.0). But if you have a Mac Pro, why not just buy the NA250A here: http://www.mypccase.com/pcexboxwistb2.html - or here: http://www.bhphotovideo.com/c/produ...SLOT_PCIe_EXPANSION_ENC_PCIe_6_2_DESKTOP.html - use the $300 difference ($2199 - $1899) to buy the PC/Mac Host interface card, which is sold separately for $189 for both of the external chassis (somewhat deceptively, the ads mention that the Netstor comes with the Host Adapter card - but that's the card that fits in the Netstor, not the Host card that fits in your Mac/PC), and give someone special a good $110 dinner? I can't do dinner out, however, until June. But if you, like me, also have a Sandy Bridge system with PCI-e 3.0 slots, then buying the NA255A makes more sense; I'd still swap out the PSU, though, to get at least 500 more watts of power.


Wow, I need to learn to read the fine print. It doesn't come with the PC/Mac Host Interface card? The idea of getting the PCIe 3.0 version was more for future proofing, as it would be an investment that should continue to do well for many years as I scale into my GPUs.

Haha, a special dinner is nice though. My wife would appreciate your consideration! Thanks Tutor!
 
The Titan specs show 250 W for the Graphics Card Power. If the Titan is only going to be used at default specification, then you're saying there isn't really a need to worry about "margin of [wattage] error" for the total wattage draw across the 4 Titans (i.e. 250 W x 4 = 1000 W, the exact total wattage for the NA255A)?

Absolutely correct. The same applies to the NA250A. Moreover, with ingenuity you can swap that PSU in the future if need be.

Wow, I need to learn to read the fine print. It doesn't come with the PC/Mac Host Interface card? The idea of getting the PCIe 3.0 version was more for future proofing, as it would be an investment that should continue to do well for many years as I scale into my GPUs.

We all need to be better at reading the fine print. I didn't notice it at first, either; only after reading, a couple of times, what was actually included. At least B&H Photo has it listed along the right side towards the bottom of the page as "Essential Accessory."

There's nothing wrong with future proofing so long as it's intentional and affordable. I was just pointing out an option that I didn't know whether you were aware of.


Haha, a special dinner is nice though. My wife would appreciate your consideration! Thanks Tutor!
You and your better half are both welcome. Always put your wife above all else so long as you breathe.
 
Wow, I need to learn to read the fine print. It doesn't come with the PC/Mac Host Interface card?

The NP960A that Tutor referenced is only PCI v2.0 capable, and only has an 8x connector on it for interfacing with the NA250A device. If you decide to move forward with the NA255A for PCI v3.0, then you'll need to get adapter card NP970A instead. I'm not sure if there's a difference in cost between the two as I haven't been able to find the v3.0 version. Perhaps Tutor has had better luck.

jas
 
The NP960A that Tutor referenced is only PCI v2.0 capable, and only has an 8x connector on it for interfacing with the NA250A device. If you decide to move forward with the NA255A for PCI v3.0, then you'll need to get adapter card NP970A instead. I'm not sure if there's a difference in cost between the two as I haven't been able to find the v3.0 version. Perhaps Tutor has had better luck.

jas

Also, I've been chatting with a guy at Dynapower USA and he had a few other details to share regarding power:

The NA255A will not power up and down in sync with the computer. In
order to let the BIOS of server or workstation identify and assign
resources appropriately, make sure to power on the NA255A first, and
then power on server or workstation.

The built-in power supply can support up to 1000W at maximum. It will be
running and provide power based on the actual "loading" of your GPU or
PCIe cards.


Any ideas for how to get the NA255A to sync up its power with the workstation when powering up/down and sleeping/waking?
 
Any ideas for how to get the NA255A to sync up its power with the workstation when powering up/down and sleeping/waking?

I can't help, I'm afraid. The NA250A has an online manual in PDF format that you can download and look at. There's very clearly a set of jumpers that you change to alter that very characteristic: power on/off on its own, or power on/off with the host computer.

I'd ASS-ume that the NA255A has the same jumper set on it, but without seeing the manual I dunno.

jas
 
Ooooouuucch! Need > one Titan, don't forget self-builds.

The NP960A that Tutor referenced is only PCI v2.0 capable, and only has an 8x connector on it for interfacing with the NA250A device. If you decide to move forward with the NA255A for PCI v3.0, then you'll need to get adapter card NP970A instead. I'm not sure if there's a difference in cost between the two as I haven't been able to find the v3.0 version. Perhaps Tutor has had better luck.

jas

Thanks jas. You're right again, as usual. I called a retailer, MyPCCase.com [1(888) 685-3962], who said that it'll take up to four weeks to get the $2,199.00 NA255A enclosure and the required adapter card (the NP970A - $249.00) to put into the computer, for a grand total of $2,448.

Limiting the discussion to choosing between the NA255A and NA250A to power up to 4 Titans omits another alternative. I just read this article about the NA255A: [ http://www.tomshardware.com/reviews/turbobox-na255a-pci-express,3430.html ], which concludes:

"Now, how about this product's value? Netstor is asking about $2,200 for its NA255A. So, right off the bat, ouch. You could build a killer workstation including three Radeon HD 7970s for that much money. Granted, you'd still need to find the right case, the right power supply, a compatible motherboard, and then cool it all. But we're Tom's Hardware; that's what we do. For that reason, we find it hard to imagine where the TurboBox makes sense for a PC builder.

But what about someone working on a Mac Pro? Apple's more limited ecosystem means there is no such thing as a three- or four-way graphics array. This could be one of the only options for enabling multiple GPUs. If massive compute potential is important, you might need to swallow hard and consider Netstor's solution the cost of doing business in Apple's world."
[Emphasis added.]

I've been using Apples since 1984, and my throat is worn so thin from swallowing hard that it's given me laryngitis. Given the current predicament surrounding what Apple has in store for those of us who need a truck or two or many more, I'm sticking with first running my Titans in my self-built PCs and will further explore PC-to-PC interconnect options for rendering needs. One could purchase a SuperMicro GPU SuperWorkstation [ http://www.supermicro.com/products/system/4U/7047/SYS-7047GR-TRF.cfm ], designed for optimal performance in a high-density 4U form factor, for about $1585. Additionally, there's a fifth PCI-e slot for adding a single-slot-width video card, such as a ($110) GT 640, for interactivity while those Titans do their thing. The 7047 SuperWorkstation has an excellent expansion capability and feature set:

Product Name - E5-2600 Series, Tower, GPU Ready, X9DRG-QF
Product Type - Barebone System
Green Compliant - Yes
Green Compliance Certificate/Authority - RoHS
Number of External 5.25" Bays - 3
Chipset Manufacturer - Intel
Chipset Model - C602
Graphics Controller Manufacturer - Matrox
Graphics Controller Model - G200
Number of SATA Interfaces - 10
Number of Processors Supported - 2
RAID Supported - Yes
Ethernet Technology - Gigabit Ethernet / Fast Ethernet
Number of Total Expansion Bays - 12
Number of 3.5" Bays - 9
Number of Total Expansion Slots - 7
Number of PCI Express x16 Slots - 5 (4 of which are double width)
VGA - Yes
Total Number of USB Ports - 9
QuickPath Interconnect Supported - Yes
Memory Standard - DDR3-1333/PC3-10600
Height - 18.2"
Width - 7.0"
Depth - 26.5"
Color - Dark Gray
Memory Technology - DDR3 SDRAM
Product Series - 7047
Processor Socket - Socket R LGA-2011
Rack Height - 4U
Form Factor - Tower
Product Model - 7047GR-TRF
Processor Supported - Xeon
Product Line - SuperWorkstation
Controller Type - Serial ATA/600
Maximum Memory - 512 GB
Number of Power Supplies Installed - 2
Number of Total Memory Slots - 16
QuickPath Interconnect - 8 GT/s
Number of PCI Express x8 Slots - 2
Network (RJ-45) - Yes
Maximum Power Supply Wattage - two 1,620W units
Number of USB 2.0 Ports - 9
Certifications & Standards - ACPI 1.0 / 2.0
RAID Levels - 0, 1, 1+0, 5
Input Voltage - 220 V AC
Product Family - SuperWorkstation 7047
Limited Warranty - 3 Year

Then all you need to do is buy and install:

(1) CPUs, e.g., two Intel Xeon E5-2620 Sandy Bridge-EP 2.0GHz (2.5GHz Turbo Boost) 15MB L3 Cache LGA 2011 95W Six-Core Server Processors BX80621E52620: [ http://www.newegg.com/Product/Product.aspx?Item=N82E16819117269 ] for $425 each (total $850 w/o cooling - I just opt for 4 heatsinks like these: [ https://www.superbiiz.com/detail.php?name=FAN-R16 ] ($31 each - total $124) given the massive cooling those strategically placed fans inside the server yield),
(2) ram, e.g., one Server Memory Kit Kingston KVR16R11D4K4/32 DDR3-1600 32GB(4x 8GB) 1Gx72 ECC/REG CL11 [ https://www.superbiiz.com/query.php...(4x+8GB)+1Gx72+ECC/REG+CL11+Server+Memory+Kit ] ($302),
(3) storage, e.g., one OCZ Vertex 3 VTX3-25SAT3-120G 2.5" 120GB SATA III MLC Internal Solid State Drive (SSD) [ http://www.newegg.com/Product/Product.aspx?Item=N82E16820227706 ] ($130) and
(4) up to four Titans, for a total cost, before the GPUs, of $2,991, excluding shipping. What kind of processors, storage and memory can you put in that NA250A or NA255A device? None.

Sure, $2,991 is $543 more than $2,448, and that doesn't take into account OS and application software costs. But the applications the user intends to run on the Titan GPU farm, and whether the user opts for Windows or Linux, will make or break the software cost issue. Like I said earlier, I go Ooooouuucch! when it comes to spending about $2500 for 4 PCI-e slots, a 1000W PSU, a case, and two interconnect cards and cables, when I could get a powerful computer for about $550 more - and my throat is raw from Apple's dilly-dallying around about the Mac Pro. Sure, I set forth a barebones configuration, but that's only for those with very limited funds who want an alternative to the barest of bones you'd get from purchasing a PCI-e chassis. So I make the above suggestion for those willing to consider another alternative. This suggestion I consider to be future-proofing on steroids.
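For those keeping score, here's the cost comparison above expressed as a quick Python tally (all prices are the ones quoted in this thread, pre-shipping and pre-GPU):

supermicro_build = {
    "SYS-7047GR-TRF barebones": 1585,
    "2x Xeon E5-2620": 850,
    "4x heatsinks": 124,
    "32GB ECC/REG DDR3-1600 kit": 302,
    "120GB SATA III SSD": 130,
}
netstor_route = {
    "NA255A chassis": 2199,
    "NP970A host adapter": 249,
}

build_total = sum(supermicro_build.values())     # 2991
netstor_total = sum(netstor_route.values())      # 2448
print(build_total, netstor_total, build_total - netstor_total)   # 2991 2448 543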
 
... . Any ideas for how to get the NA255A to sync up its power with the workstation when powering up/down and sleeping/waking?

See my most recent post just before this one, above. For powering up and down, you'll just have to start the NA255A first and (I would suspect) power it down last. Like jas, I do not know the correct answer to this question, because when I click on the link for the NA255A manual, the NA250A manual opens. However, I would suspect that the cards in the NA255A and the NA250A will either have insomnia or behave just as they would if they were inside the computer system, so awakening the Mac Pro from sleep shouldn't be at all problematic. Just a guess, however.
 
"Now, how about this product's value? Netstor is asking about $2,200 for its NA255A. So, right off the bat, ouch. You could build a killer workstation including three Radeon HD 7970s for that much money. Granted, you'd still need to find the right case, the right power supply, a compatible motherboard, and then cool it all. But we're Tom's Hardware; that's what we do. For that reason, we find it hard to imagine where the TurboBox makes sense for a PC builder.

But what about someone working on a Mac Pro? Apple's more limited ecosystem means there is no such thing as a three- or four-way graphics array. This could be one of the only options for enabling multiple GPUs. If massive compute potential is important, you might need to swallow hard and consider Netstor's solution the cost of doing business in Apple's world."
[Emphasis added.]

I've been using Apples since 1984, and my throat is worn so thin from swallowing hard that it's given me laryngitis. Given the current predicament surrounding what Apple has in store for those of us who need a truck or two or many more, I'm sticking with first running my Titans in my self-built PCs and will further explore PC-to-PC interconnect options for rendering needs. One could purchase a SuperMicro GPU SuperWorkstation [ http://www.supermicro.com/products/system/4U/7047/SYS-7047GR-TRF.cfm ], designed for optimal performance in a high-density 4U form factor, for about $1585. Additionally, there's a fifth PCI-e slot for adding a single-slot-width video card, such as a ($110) GT 640, for interactivity while those Titans do their thing. The 7047 SuperWorkstation has an excellent expansion capability and feature set:

Sure, $2,991 is $543 more than $2,448, and that doesn't take into account OS and application software costs. But the applications the user intends to run on the Titan GPU farm, and whether the user opts for Windows or Linux, will make or break the software cost issue. Like I said earlier, I go Ooooouuucch! when it comes to spending about $2500 for 4 PCI-e slots, a 1000W PSU, a case, and two interconnect cards and cables, when I could get a powerful computer for about $550 more - and my throat is raw from Apple's dilly-dallying around about the Mac Pro. Sure, I set forth a barebones configuration, but that's only for those with very limited funds who want an alternative to the barest of bones you'd get from purchasing a PCI-e chassis. So I make the above suggestion for those willing to consider another alternative. This suggestion I consider to be future-proofing on steroids.

Good gosh, perhaps I'm just old or -- more likely -- so constantly challenged by my Mac Pros' performance during mission-critical jobs that the PC alternative is finally beginning to make sense. And I'm finding the price-point advantage difficult to ignore.

That link you've thrown up to the Supermicro site has me supremely jealous of those folks who actually love their Windows machines, and I realize why so many of them laugh at Mac Pro users. 4 x PCIe 3.0 (double-width) slots, 512 GB RAM (which could have a higher clock rate), SATA III, a much more efficient and powerful PSU, and a case that isn't entirely unattractive... all that's missing is USB 3.0.

At some point I suspect a light bulb is going to go off and we'll find ourselves asking: is the constant struggle to fit the "guts" of a PC into the "OS and industrial design" of a Mac Pro really worth it? I'm certainly getting there (and I'm a HUUGE Apple fan since '97). Apple, convince me otherwise!
 
At some point I suspect a light bulb is going to go off and we'll find ourselves asking: is the constant struggle to fit the "guts" of a PC into the "OS and industrial design" of a Mac Pro really worth it? I'm certainly getting there (and I'm a HUUGE Apple fan since '97). Apple, convince me otherwise!

We're wandering a bit OT here, but for me it is, and always has been, about the OS. OS X is a requirement for me because I A) want all of my useful applications and B) want a UNIX underpinning. Linux isn't an option because I can't run my apps on it. Windows isn't an option because, well, I hate Windows.

Give me a UNIX CLI, or give me death!

The obvious answer is: a Hackintosh. But those come with their own list of challenges and issues. And all it takes is one OS X update... *SPLAT* the box no longer works right for some reason. The advantage of them of course is that they're insanely less expensive and you can pretty easily build a faster Hack than you can a Mac Pro.

jas
 
... . Supermicro ... has ... a much more efficient and powerful PSU, ...

Don't forget that you get two of those 1,620 watt PSUs in the $1585 price. A single PSU of that strength can set you back over $350 retail.

Supermicro makes some of the best servers and workstations. They're durable, powerful and efficient, as well as being appropriately designed where it counts most for functionality. That fifth PCI-e slot at the top of the motherboard isn't x16 (so my list does contain an error), but merely x8. Because of its location, with DIMM slots in front of it, it's intended to be used for a professional graphics workstation video card - those tend to be compact, i.e., not as long as the more popular cards, but more like the size of a GT 120 or a 4 GB GT 640, which is what I would put there. See, e.g., http://www.newegg.com/Professional-Graphics-Cards/SubCategory/ID-449 . In other words, it's for a card that isn't going to be relied on all of the time for mainly CUDA tasks, because when being driven hard, CUDA can max out a video card, so things like screen updates/redraws would become noticeably slow. That's where that fifth card comes in, to keep interactivity smooth.

Supermicro systems certainly don't look as appealing from the outside as do Mac Pros. But I don't spend my time looking at my computers' cases just to be looking at them.

We're all aging, at best. I hope to reach 60 this fall, and as I age I become more conscientious about what I get in return for my money. There is nothing that the $2.45K PCI-e computer chassis has inside that the Supermicro doesn't, except for a fast interface to an external computer. Since the Supermicro is, however, a computer, it has a lot inside of it that the PCI-e chassis doesn't, and all of its internal connections are just as fast as the external connection of the PCI-e chassis. Moreover, I can install a variety of Sandy (and, later, Ivy) Bridge processors into the Supermicro, and a lot of memory and storage. That enables me to more easily justify paying $1585 for the Supermicro barebones system, tho' headless and lacking any recall ability at that price, than I can justify paying $2448 for the PCI-e chassis, which will always be, by itself, headless and never possessing any recall ability.
 
I'd rather be loyal to forum users than to a brand of an uncaring entity.

We're wandering a bit OT here, but for me it is, and always has been, about the OS. OS X is a requirement for me because I A) want all of my useful applications and B) want a UNIX underpinning. Linux isn't an option because I can't run my apps on it. Windows isn't an option because, well, I hate Windows.

Give me a UNIX CLI, or give me death!

The obvious answer is: a Hackintosh. But those come with their own list of challenges and issues. And all it takes is one OS X update... *SPLAT* the box no longer works right for some reason. The advantage of them of course is that they're insanely less expensive and you can pretty easily build a faster Hack than you can a Mac Pro.

jas

Across this wide planet that we call Earth, we, the lucky ones, are given only a brief moment in time to take it all in. Some people tend to see the differences in others and in things rather than the commonalities. Like that phenomenon, a cult of the Mac took root, but it was never really able to sustain itself over the long haul, especially because around 2006 the Mac fully became a PC. Some of the people who saw the commonalities became known as Hackintoshers because they wouldn't draw a hard line between Mac and PC. The Hackintoshers knew that the "I'm a PC" and "I'm a Mac" line was just a screen to hide the true reality. The most loyal of Mac Pro users can now say, "I'm all Mac," even though they might have Windows installed on their Macs via Boot Camp. Even the mere existence of Boot Camp evidences Apple's acknowledgment of the commonalities. Only those who are afraid to remove that side panel can say my Mac is 100% pristine Mac. But if the truth be told, there are many Macs with owner-installed CPUs, video cards, etc. that the "I'm a Mac" guy swapped because someone explained how it's done in an understandable way. In many of those cases, the someone who explained how it's done is more than likely a Hackintosher. A Hackintosher was the first to swap CPUs in the Nehalem Mac Pros. Now lots of "I'm a Mac" guys do it, even with Westmeres, relying on what that Hackintosher told others. Who do most "I'm a Mac" guys rely on as the source of their information for video upgrades? Could it be another Hackintosher? Real purists are few and far between, because many aren't afraid to change their Mac Pro when they consider that something is simple to do. But was it that simple for the first person who figured out how to do it correctly? Truly, we all just fall on different points along the same continuum, from the too timid to the too bold. I liken the information transfer process in many ways to a circle, with each step (or dot drawn) in the process being as follows: … PC user nourishes Mac user, who then nourishes Hackintosher, who was/is a PC user, who then nourishes Mac user … ad infinitum. We are not separate entities, because we consume one another's experiences.

What does this tale have to do with the Titan, you might ask? Everything, for we all feed on one another's experiences. There is no Mac Titan - I'm not saying that there will never be one, but I am saying that as of today there is no Titan that runs on the Mac and yields the same benefits as a Titan running on a Dell, HP, or even a self-build like my EVGA SR-2. For "I'm a Mac" to know what he or she should expect from a Titan, he or she should first know what the Titan already provides on its intended platform, namely a Windows PC. What's the most reliable source for that information?

I agree with you on almost every point you make, even the Hackintosh "SPLAT" one, except that I disagree that talking about Windows and/or PCs is off topic, because the Titan, as of now, is a Windows/PC product. I also disagree with, at least, three implicit assumptions, namely: (1) that one who has OSX 10.6.7 installed on a Hackintosh should ever upgrade/update (no upgrade/update = no splat), and there is not a real reason to upgrade just because the majority of people now do it, for problem fixes that should have been caught through more rigorous beta testing, and for eye candy; (2) that discussing the topic that a Titan can work in a 2009 Mac Pro won't logically and necessarily lead to the question of (how many Titans can stand on the head of a needle or) how many you can get to work in or otherwise with a Mac Pro, and that when someone says they need the power of four Titans, the discussion won't logically and necessarily lead to a discussion of the alternatives and their costs; and (3) that one should avoid pointing out to the person with $2448 of bananas draped around his/her neck that there's a $1,585+ 1000 lb. gorilla in the room, and that the room they both occupy is the gorilla's cage. As of now, there's a Titan mainly for Windows, but none for the Mac Pro - so the PC is the gorilla, and the cage, or the environment, is Windows. But to get back directly on point: if you do only Mac Pro, i.e., "You're a Mac" and absolutely nothing else, and you need the massive compute potential that only a minimum of 3 or 4 Titans ganged together can provide, then you might need to swallow hard and consider Netstor's solutions costing from $2200 to $2448, or Cubix's or Magma's even more expensive PCI-e chassis solutions, to be the cost of doing business in only Apple's world.
 
I like that we can peacefully debate these facts without it devolving into a flame fest.

(1) that one who has OSX 10.6.7 installed on a Hackintosh should ever upgrade/update (no upgrade/update = no splat)

Perhaps, but no upgrade also means no access to better drivers for things. Things like: video drivers, USB 3 drivers, etc. Love it or hate it, OS X's updates bring better drivers and better performance almost without question.

(2) that discussing the topic that a Titan can work in a 2009 Mac Pro won't logically and necessarily lead to the question of (how many Titans can stand on the head of a needle or)

Maybe... Keep reading.

(3) that one should avoid pointing out to the person with $2448 of bananas draped around his/her neck that there's a $1,585+ 1000 lb. gorilla in the room
(rest clipped)

I can't speak for 5050's intentions, only my own. My interest in the Netstor solution didn't include Titans. Instead, I considered the idea of having multiple PC-based GTX 680s in the device once the prices started coming down. Adobe's Premiere Pro Next (the next version) will take advantage of multiple GPUs for exports once it's released in May. This intrigues me... a lot!

But, as you pointed out: the cost for the device is ridiculous. Way, way, WAY ridiculous. So it's not going to happen. At least, not on my Mac. :)

jas
 
Even when the trail leads to a seeming dead end, the journey itself teaches.

I like that we can peacefully debate these facts without it devolving into a flame fest.

jas, I only intended to shed light. I love humanity, not the tools that we've created to get work done. I intended no flaming, no glowing, no heating and no smoking. Growing up in B-ham in the first half of the '60s and in the City of Angels in the second half of that decade taught me a lot about the power of light and the harm that comes from fire. I was just pointing out the fact that many of us have become Hackintoshers - even those who don't want to believe that they are - or, at least, have likely taken the first steps down this path, to various degrees and extents, out of a desire and/or felt need to keep their PCs, which include their Mac Pros, up to date.

Perhaps, but no upgrade also means no access to better drivers for things. Things like: video drivers, USB 3 drivers, etc. Love it or hate it, OS X's updates bring better drivers and better performance almost without question.

Love it or hate it, those who go through what is required to get OSX to run smoothly on their hardware of choice can be said to exhibit greater devotion to that OS than any others. I know that I shouldn't draw inflexible lines - never saying "never" is the wisest course - because what I say I'll never do is what I'll have to do next. The people who've had problems with their Hackintoshes splatting on updates/upgrades are mainly those who dabble and don't realize that it requires a commitment to know much more than others are tasked with knowing about (1) their system(s) and other hardware, including the system being emulated, (2) the various OSes and their applications, and (3) what's in each upgrade/update and what it's more likely to do once it's installed. I love Pacifist [ http://www.charlessoft.com ]. If an update or upgrade has some special driver that I need, I'll just pacify myself by adding that particular driver. My point was that an avowed Hackintosher should not have a casual update/upgrade mentality. A side benefit of understanding what commitments you have to make to be an effective Hackintosher, and actually keeping those commitments, is that it forces you to continue learning.


I can't speak for 5050's intentions, only my own. My interest in the Netstor solution didn't include Titans. Instead, I considered the idea of having multiple PC-based GTX 680s in the device once the prices started coming down. Adobe's Premiere Pro Next (the next version) will take advantage of multiple GPUs for exports once it's released in May. This intrigues me... a lot!

But, as you pointed out: the cost for the device is ridiculous. Way, way, WAY ridiculous. So it's not going to happen. At least, not on my Mac. :)

jas

I understand that the Titan played no role in the conception of the external PCI-e chassis, and that there are many reasons why someone may want more PCI-e slots. When it comes to performance, those open slots provide an open invitation as points of insertion because of their superior speed. I also understand that various people have differing needs. I've been eyeing the external chassis for many years - in fact, long enough ago that if someone had then asked me, "Is CUDA good to have?" I would've responded, "It's not as good to have as fried Gulf Red Snapper."

Given that the price of the external chassis is ridiculous, it got me thinking last night: why not just come up with a way to connect two or more computers together, just as the external PCI-e chassis connects to its host, to get the same benefit? So I started doing some research.

Here is the first most promising and intriguing lead that I found: http://davidhunt.ie/wp/?p=232. There, David Hunt details how he got two computers connected through an Infiniband Network that he set up at his home, using:

"2 x Mellanox MHEA28-XTC infiniband HCA’s @ $34.99 + shippping = $113 (€85)

1 x 3m Molex SFF-8470 infiniband cable incl shipping = $29 (€22)

Total: $142 (€107)."

Importantly, there David points out that you don't need one of those mega-grand switch boxes to connect just two computers directly to each other with infiniband. So David taught me some things that I didn't already know.

Then, I googled Mellanox and went to their solutions page [ http://www.mellanox.com/page/solutions_overview?gclid=CPf_85_P6LYCFS8OOgodWg4AVA ], where I saw that they provide solutions for "High-performance compute clusters [that] require a high-performance interconnect technology providing high bandwidth, low latency and low CPU overhead resulting in high CPU utilization for the application’s compute operations."

To say the least, this began to intrigue me. So I then clicked on HPC under "Solutions Overview" and here's what, in relevant part, displayed:


High Performance Computing (HPC)

Overview

High-performance computing encompasses advanced computation over parallel processing, enabling faster execution of highly compute intensive tasks such as climate research, molecular modeling, physical simulations, cryptanalysis, geophysical modeling, automotive and aerospace design, financial modeling, data mining and more. High-performance simulations require the most efficient compute platforms. The execution time of a given simulation depends upon many factors such as the number of CPU/GPU cores and their utilization factor and the interconnect performance, efficiency and scalability. ... .

One of HPC’s strengths is the ability to achieve best sustained performance by driving the CPU/GPU performance towards its limits. ... .

By providing low-latency, high-bandwidth, high message rate, transport offload for extremely low CPU overhead, Remote Direct Memory Access (RDMA) and advanced communications offloads, Mellanox interconnect solutions are the most deployed high-speed interconnect for large-scale simulations, replacing proprietary or low-performance solutions. Mellanox's Scalable HPC interconnect solutions are paving the road to Exascale computing by delivering the highest scalability, efficiency and performance for HPC systems today and in the future. Mellanox Scalable HPC solutions are proven and certified for a large variety of market segments, clustering topologies and environments (Linux, Windows). ... .

High-performance computing enables a variety of markets and applications:
... .
EDA
EDA simulations often involve 3D modeling, fluid dynamics, and other compute-intensive processes that require high-performance computing (HPC) data center solutions.
... .
Media and Entertainment
To reduce production lags, today’s media data centers invest in high-performance HPC cluster technology, combining the power of hundreds or thousands of CPUs in the service of a highly complex rendering task.
... ." [Emphasis Added]


Being even more intrigued, I began to examine Mellanox's product lines [ http://www.mellanox.com/page/products_overview ], and in particular their InfiniBand/VPI cards [ http://www.mellanox.com/page/infiniband_cards_overview ], because I also had another Safari window open to this [ http://www.newegg.com/Product/Produ...tegory=27&Manufactory=13783&SpeTabStoreType=1 ], and I was eyeing that $556 Mellanox MHQH19B-XTR ConnectX-2 VPI network adapter (40Gbps, PCI Express 2.0 x8) card [ http://www.newegg.com/Product/Product.aspx?Item=N82E16833736004 ] because of its relatively low price and the similarity of its specs to those of the NA250A. Then, when I clicked on the ConnectX-2 VPI link at the Mellanox site on this page [ http://www.mellanox.com/page/infiniband_cards_overview ], I was taken here: [ http://www.mellanox.com/page/products_dyn?product_family=61&mtag=connectx_2_vpi ]. There, I downloaded this: [ http://www.mellanox.com/related-docs/user_manuals/ConnectX 2_VPI_UserManual.pdf ]. Then I struck oil - well, not really oil, but something sticky, murky and obscure like oil. Section 4.4 of the user manual is subtitled "NVIDIA GPUDirect Support," and provides as follows:

4.4 NVIDIA GPUDirect Support
Utilizing the high computational power of the Graphics Processing Unit (GPU), the GPU-to-GPU method has proven valuable in various areas of science and technology. Mellanox ConnectX-2 based adapter card provides the required high throughput and low latency for GPU-to-GPU communications.
4.4.1 Hardware and Software Requirements

Software:
Operating Systems:
• [Red Hat Enterprise Linux] 5.4 2.6.18-164.el5 x86_64 or later
• Mellanox OFED with GPUDirect support
• NVIDIA Development Driver for Linux version 195.36.15 or later

Hardware:
• Mellanox ConnectX-2 adapter card
• NVIDIA Tesla series.

So, for 2 x $556 plus the cabling cost, one can connect two computers, each with Nvidia Tesla cards, and the Tesla cards in both systems would be able to operate as if they were in the same computer. That sounded a lot less ridiculous than $2,448.
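A rough tally of that two-node option versus the chassis route (the card price is the Newegg figure quoted above; the cable price is just a placeholder guess, since SFF-8470/QSFP cable prices vary):

connectx2_card = 556     # Mellanox MHQH19B-XTR, per card (Newegg price quoted above)
cable_estimate = 60      # assumed cable cost -- a placeholder, not a quoted price
netstor_total = 2448     # NA255A chassis + NP970A host adapter

infiniband_total = 2 * connectx2_card + cable_estimate     # 1172
print(infiniband_total, netstor_total - infiniband_total)  # roughly $1,276 less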

Then my feeling of elation began to evaporate, because it dawned on me that I had written in post #564 here: https://forums.macrumors.com/threads/1333421/ about some similarities and differences between the Titan and the Tesla cards; here's the salient part:
... .
"In Titan, certain other Tesla card features have been disabled (i.e., the Titan drivers don't activate them).

RDMA for GPU Direct is a Tesla feature that enables a direct path of communication between the Tesla GPU and another or peer device on the PCI-E bus of your computer, without CPU intervention. Device drivers can enable this functionality with a wide range of hardware devices. For instance, the Tesla card can be allowed to communicate directly with your Mercury Accelsior card without getting your Xeon or i7 involved. Titan does not support the RDMA for GPU Direct feature." [Emphasis Added]

Well, at the least, I learned more than I knew before following this trail, and Mac Pro users with the higher-priced Tesla cards in two systems have an alternative interconnect solution, if they don't mind installing and working with RHEL. Like I said earlier, "PC user nourishes Mac user, who then nourishes Hackintosher, who was/is a PC user, who then nourishes Mac user … ad infinitum." I don't ignore what I've learned, and hopefully I will continue to learn much more about Macs and PCs; long may I own systems of both categories, as well as my Ataris and Commodores. But for as long as I am blessed to breathe, to me they shall be only inanimate tools, not deserving of any of my allegiance.

BTW - I just remembered some more vital pieces of information. gnif believes that the Titan has locked within it the capability to do everything that a Tesla K20X does [ http://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/ ] (and I tend to believe that he is right). Also, I know that the Titan is very tweakable and can be made to yield great performance. Might not the Tesla drivers that enable Nvidia GPUDirect/RDMA support be installable for the Titan by causing the installation program to believe that the Titan is a Tesla card, by modifying the resistors on the back of the Titan card to change its PCI Device ID byte? To me this hack sounds a little like the Mac Pro 4,1-to-5,1 EFI hack, except that this approach involves modifying the hardware itself, which carries its own special risks. Then one could selectively install the driver at issue. That sounds like a job where Pacifist could help, because it allows for selective installs. This might require giving the Titan back its own ID by replacing the resistors as before. In other words, you wouldn't want the Titan to lose its own unique features in the process, but just add certain Tesla features that are now locked away. Thus, if the lack of Nvidia GPUDirect/RDMA support is due solely to a driver installation issue, then the Mellanox ConnectX-2 adapter card solution would be available to owners of Titans housed in two separate computer systems, if they are willing to hack their video cards like gnif does.
 
Just to clear things up: a PSU doesn't have 100% efficiency; some of the power is lost as heat.

Efficiency is usually in the 80-92% range for very high quality units.
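To put a number on it: the wattage on the label is DC output capability, and the draw at the wall is higher by the efficiency factor. A quick illustration (the efficiency values are typical figures for decent-to-excellent units, not measurements of any particular PSU):

rated_output_w = 1000
for efficiency in (0.80, 0.87, 0.92):
    wall_draw_w = rated_output_w / efficiency
    print(round(wall_draw_w), "W from the wall at", efficiency, "efficiency")

So a fully loaded "1000 W" supply can pull roughly 1090-1250 W from the outlet.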
 
I agree

As stated in response to the other thread, the card is NOT compliant with a Mac Pro's limit of 225 W on a single PCI-E slot: 75 W from the slot and 150 W from the two plugs. The Titan has a stated draw of 250 W and has actually tested higher than that - 271 W in one review at peak usage. You are risking cooking something. Try running a GPU stress test at 100% for 24 hours, or render a 30-minute multilayer 1080p60 HD video with numerous effects, and let us know how it goes.

There is more than one horror story from quite sophisticated Mac Pro users who don't think what you are doing is a safe bet. You may get away with it for a period of time and then...the smell of smoke. I did it over two decades ago on an original IBM PC. You don't forget it.

10.8.4 has a driver that will enable the Titan, but that doesn't mean hardware running 10.8.4 will power it. I would love to have one, but not at the risk of frying a 5,000 USD Mac Pro. Multiple huge hard drives/SSDs, a CPU swap for the 3690, a 670 SC, and additional upgrades/mods were acceptable risks; I'm not convinced this one is, without additional power.

Modern GPUs have several power states, including a full-power mode and a low-power energy-saving mode. I guess that with 6-to-8-pin adapters, the card will probably stay in the low-power mode, thus precluding high performance in highly demanding games. This is good, since otherwise you are risking cooking something...
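For reference, here's the power-budget arithmetic behind the 225 W figure quoted above, put against the Titan's stated and peak draw (the 75 W per 6-pin allowance is the PCIe spec figure; the rest are the numbers already quoted in this thread):

slot_w = 75              # available from the PCIe slot itself
six_pin_w = 75           # per 6-pin auxiliary power connector
budget_w = slot_w + 2 * six_pin_w     # 225 W "by the book" in a Mac Pro

titan_spec_w = 250       # Nvidia's stated board power
titan_peak_w = 271       # peak reported in the review mentioned above

print(titan_spec_w - budget_w, titan_peak_w - budget_w)   # 25 W and 46 W over budget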
 