As there seemed to be doubt as to whether Netkas' little trick still works... I tossed in a GTX285 yesterday and gave it a shot.

Works fine...makes COD4 playable at the same "High" settings that an 8800GT CHOKES on.

The RAM shows up with the correct amount because I patched the patch.

GTX285 running on a first-gen Mac Pro. Nice card and a nice trick, courtesy of our dear friend Netkas.

Seriously, how on earth does an 8800GT choke on CoD4? Is the port that bad? Or are Apple drivers that pathetic?
 
The next step

Thanks for re-uploading the injector Netkas. :)

I have installed the injector and the EVGA drivers. My 260 GTX is in slot 1 and my 2600 XT Mac Edition is in slot 3. After finishing the installation I rebooted; I got the OS X loading screen (the 2600 XT was outputting it, I guess), but then both displays went solid white with no mouse cursor. I left it up for about a minute but no change.

I have also tried to boot with just the 260 GTX after installing the injector and drivers, but got no display whatsoever in this configuration, even though I could still hear the usual boot process going on.

Is there any way around needing an EFI NVIDIA card? After all, shouldn't this just be a matter of installing the necessary software?
 
AFAIK, none of us have managed this on a Mac Pro without a proper Nvidia Mac card to boot. There were some posts early on that suggested one might be able to use something called rEFIt and run a script, but I never got anywhere with it. You could look at the Hackint0sh forum for clues, but this was beyond me!
 
Thanks 10THzMac, I guess I will have to keep an eye out for a reasonably priced Nvidia card, or alternatively try to do a swap with someone for my 2600 XT.
 
You need another Nvidia card with a working driver (i.e., a working EFI ROM).

Hey Rominator, I have a few questions for ya. I have a Mac Pro 2,1 and need to upgrade my system to a GTX 285. I am a post-production professional and am planning on building a DaVinci Resolve suite out of my Mac Pro. DaVinci Resolve relies heavily on CUDA processing for all of its rendering and image processing, and currently requires the GTX 285, which obviously won't run natively on my system.

My questions are these: Is there a performance hit running an injected GTX 285 in a PCIe 1.0 slot vs. a native card running in a PCIe 2.0 slot? Video game and OpenGL benchmarks are one thing, but I need to rely on its ability to handle high-end CUDA computations at a day-to-day pro level.

My other question is just that: will my pro apps run without glitch or hiccup? Final Cut Pro, Apple Color, Motion, After Effects CS5, Maya 2010, Cinema 4D, and DaVinci Resolve all rely on GPUs in some form.

My last question is a matter of space: I need to be able to run my AJA Kona 3 card in slot 4, a RAID card in slot 3 or 2, the helper card (8800GT), and an Nvidia GTX 285. Isn't the 285 a double-wide card? I don't think I'm going to have the space. It is sounding more and more like I just need to upgrade my system. I know as soon as I do, a new Mac Pro will come out :(.
Any help would be greatly appreciated.
 
I don't know answers to all of your questions as my job is in live production, not post.

But I am pretty sure that the lowest slot is the double-wide one. This means you can use 4 cards TOTAL in there.

The 8800GT helper has to be in a 4x slot; not sure about your other cards.
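For a rough sense of what the 4x slot costs in host-to-card copy bandwidth, here is a quick back-of-envelope in Python (theoretical peaks only; real copy rates sit well below these):

```python
# Theoretical peak one-way payload bandwidth of a PCIe 1.0 slot.
# Each PCIe 1.0 lane signals at 2.5 GT/s with 8b/10b encoding,
# leaving 2.0 Gbit/s = 250 MB/s of payload per lane, per direction.
PAYLOAD_MB_PER_LANE = 2.5e9 * (8 / 10) / 8 / 1e6  # 250.0 MB/s

def slot_bandwidth_mb(lanes):
    """Peak one-way payload bandwidth (MB/s) for a PCIe 1.0 slot."""
    return lanes * PAYLOAD_MB_PER_LANE

print(slot_bandwidth_mb(4))   # 4x helper slot: 1000.0 MB/s
print(slot_bandwidth_mb(16))  # 16x main slot:  4000.0 MB/s
```

So the helper slot has a quarter of the copy bandwidth of a 16x slot; on-card work is unaffected, only CPU-GPU transfers.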

If you are in LA, bring me a GTX285 and a six of Stella; I'll do the mod so it will work natively in future Mac Pros.
 
Thanks for the help and answers. Several of the cards are already fairly large and I'm afraid they would touch. When I can swing the money for the cards I may take you up on your offer! So you're saying that beyond the usual injection process on that wiki, you know of a process to make it run natively on future Mac Pros as well? Very, very cool. I would really love to see some benchmark comparisons of an injected GTX 285 in an early PCIe 1.0 Mac Pro vs. a current PCIe 2.0 machine running a native Mac GTX 285. I would be really interested to see the pro app benchmarks and CUDA performance comparisons.
 
Hey Rominator, another quick question. I heard some people reported sluggishness in typical OS X tasks, like Exposé stuttering and various aspects of Core Graphics being a little slow. Have you seen any of this? I switched from my ATI X1900 XT to an 8800GT a year or so ago and was not impressed. I actually returned it because the OS X GUI ran smoother and faster, and I was getting better performance in my pro apps, with the ATI X1900 XT. Granted, that was before SL. Any thoughts? I think with DaVinci Resolve I will have to use the 8800GT in the 4x slot to run my monitors, because Resolve requires two cards: one for the monitors and one strictly for CUDA processing. Hmm, I may end up with much slower general graphics in everything else :( Sounding more and more like I will need to upgrade my machine.
 
I do believe that CUDA can use a combined aggregate of cores, i.e. it may do you some good to have both the 8800 and the GTX285.

I totally agree it would be an awesome config, and that's what I plan to run. But I think Resolve is anal about it and wants you to run your monitors on a separate card from the intended CUDA powerhouse. Any thoughts on the performance of an 8800GT in a 4x slot? I would like to see someone run that and post some benchmarks. From what I can see online, some PC users have done it and said they had no issues with games like COD4. Hopefully the 8800GT doesn't max out that bandwidth and would be snappy enough for my uses in After Effects, Maya, and general OS X operations in that 4x slot? Thoughts?
 
Stella

GoodMan:cool:
 
Rominator,
In that config (8800GT and GTX285), is the 8800GT in a 4x slot or an 8x slot? This is in a Mac Pro 2,1. And what sort of performance hit would I see if I had to count only on the 8800GT in that config?
 
CUDA and OpenCL are both capable of using multiple GPUs, and there is a nice example, MonteCarloMultiGPU, that illustrates the coding to farm a simulation across more than one card. Whether a particular CUDA-aware app has been written to utilise more than one GPU is, however, another matter. An app might use the first card it finds (device 0) and just use that, or be programmed to find all cards and distribute the load.
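Whether the load actually spreads across cards is entirely up to the host code. As an illustration only (plain Python standing in for the CUDA host logic; splitting in proportion to core count is one plausible strategy, not necessarily what MonteCarloMultiGPU itself does), the "find all cards and distribute" pattern looks something like:

```python
# Illustrative sketch of host-side multi-GPU work distribution:
# enumerate devices, then split the problem across them. Plain
# Python stands in for the actual CUDA device-management calls.
def partition_work(total_items, device_cores):
    """Split total_items across devices proportionally to core count."""
    total_cores = sum(device_cores)
    shares = [total_items * c // total_cores for c in device_cores]
    shares[0] += total_items - sum(shares)  # remainder goes to device 0
    return shares

# An 8800GT (112 cores) alongside a GTX 285 (240 cores),
# dividing 262144 simulation paths:
print(partition_work(262144, [112, 240]))  # [83410, 178734]
```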

It's a slightly oblique answer to your other question, but when running various tests on (a) a native Mac 285 and (b) an injected PC 285, I found that while the on-card calculations took the same time, some of the CPU-GPU copy routines were slower on the injected card. I speculated that this was to do with the slower link speed (2.5 vs 5 GT/s) but did not come to a firm view. This might extend to questions about the PCIe generation. Of course, if the computation time is dominated by on-card sums it will be mostly irrelevant.
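To put numbers on that link-speed difference (x16 in both cases; these are theoretical peaks with 8b/10b encoding, so measured copy rates will be lower):

```python
# Peak one-way payload bandwidth of a x16 link at each link speed:
# 8b/10b encoding means 80% of the raw bit rate is payload.
def x16_payload_gb(gt_per_s):
    """Peak payload (GB/s) for 16 lanes at the given transfer rate."""
    return 16 * gt_per_s * 0.8 / 8  # GT/s -> payload Gbit/s -> GB/s

print(x16_payload_gb(2.5))  # injected card at 2.5 GT/s: 4.0 GB/s
print(x16_payload_gb(5.0))  # Mac card at 5 GT/s:        8.0 GB/s
```

A clean factor of two, which is consistent with the copy routines (and only the copy routines) being slower.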

I have ordered a Zotac GTX 480 and will report back on that one! I do not expect any life except under Boot Camp, but will see what happens at least.
 
Thanks for the response! I think you're right; I'm hoping PCIe 1.0 vs PCIe 2.0 doesn't hurt me performance-wise with DaVinci Resolve. You're right that if the computations are done mainly on the card side, it hopefully shouldn't be an issue. Injecting a GTX285 with an 8800GT piggyback is a far cheaper option than an entire new Mac Pro with new video cards.
Do let us know about your luck with the GTX 480!
 
I do believe that CUDA can use a combined aggregate of cores, i.e. it may do you some good to have both the 8800 and the GTX285.

It can, IF the programmer has specifically coded for this. It isn't an automatic feature of CUDA, and can take some work.

(For reference: I've done multi GPU CUDA programming.)
 
Hey Rominator,
Any chance you could do me a huge favor? Pull the GTX 285, leave the 8800GT in the second slot, run Xbench, and post your results? You have a Mac Pro 1,1 or 2,1, right? I want to see what sort of performance one could get from the 8800GT in the slower slot. Thanks.
 
I'm interested in building a Resolve OS X system too. It's going to be interesting to see how people squeeze in:

1) A "normal" display card to drive two monitors.
2) An Nvidia card (GTX 285?) sans monitors solely for Resolve.
3) Some kind of storage HBA.
4) A video I/O card (Decklink etc.)

I'll be waiting for more video I/O options and FCP XML to jump in.
 
Rominator

Any chance I could get you to benchmark your CUDA scores with your current 8800 and GTX285 setup? Then open Expansion Slot Utility, pick the bottom option that switches the slots to two 4x and two 8x, run the benchmark again, and post both sets of results. That would really be the deciding factor for whether I can make this work with all the cards I need in my system. Thanks for your time and expertise, man.
 
Missed this post.

Point me to whichever CUDA benchmark you are familiar with and I will give it a shot.
 
On CUDA benchmarks: I know Rominator was asked the question, but I also had these to hand. Mac Pro 3,1 ('08) with one GTX 285 Mac Edition and one injected PC 285 with 2GB RAM. OS is 10.6.3 with the CUDA 3.0 release version. The link is 16x in both cases, but the link speed is 5 GT/s for the Mac card and 2.5 GT/s for the injected one. Most of the time you can see very little difference between these two cards. For example:

bash-3.2$ ./MonteCarloMultiGPU
… bumpf deleted…

GPU #0
Options : 128
Simulation paths: 262144
Time (ms.) : 1.538000
Options per sec.: 83224.968161
GPU #1
Options : 128
Simulation paths: 262144
Time (ms.) : 1.643000
Options per sec.: 77906.268704
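The Options-per-second lines are just the option count divided by the kernel time, an easy sanity check on figures like these:

```python
# Sanity-check the MonteCarloMultiGPU output above:
# options per second = option count / kernel time.
def options_per_sec(options, time_ms):
    return options / (time_ms / 1000.0)

print(round(options_per_sec(128, 1.538), 2))  # GPU #0: 83224.97
print(round(options_per_sec(128, 1.643), 2))  # GPU #1: 77906.27
```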

If just the Mac card is up, it turns in >100k O/s (there is an overhead in going multi-GPU, it seems). Likewise with the bandwidthTest code. But here and there you can see a more substantial difference. The most dramatic is with the BlackScholes code. Here it is run twice, targeted at each GPU; I have deleted some irrelevant bits of the output.

**Here is the Mac card:

bash-3.2$ ./BlackScholes --device=0

Using CUDA device [0]: GeForce GTX 285

Executing Black-Scholes GPU kernel (512 iterations)...
Options count : 8000000
BlackScholesGPU() time : 0.698568 msec
Effective memory bandwidth: 114.519933 GB/s
Gigaoptions per second : 11.451993

BlackScholes, Throughput = 11.4520 GOptions/s, Time = 0.00070 s, Size = 8000000 options, NumDevsUsed = 1, Workgroup = 128

**Here is the injected card

bash-3.2$ ./BlackScholes --device=1

Using CUDA device [1]: GeForce GTX 285

Executing Black-Scholes GPU kernel (512 iterations)...
Options count : 8000000
BlackScholesGPU() time : 1.021953 msec
Effective memory bandwidth: 78.281478 GB/s
Gigaoptions per second : 7.828148

BlackScholes, Throughput = 7.8281 GOptions/s, Time = 0.00102 s, Size = 8000000 options, NumDevsUsed = 1, Workgroup = 128

Now, I have not turned down the PCIe multiplier (I do not know how) and no longer have an 8800 in the machine, but maybe Rominator could run these same codes? Almost all of the time the injected card performs very similarly, though. E.g. both give about 480 Gflops on the n-body simulation.
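To put a single number on the BlackScholes gap, from the two runs above:

```python
# Ratio of the two BlackScholes throughputs reported above
# (same kernel, same GPU silicon, different link training).
mac_goptions = 11.4520       # Mac Edition card
injected_goptions = 7.8281   # injected PC card

slowdown = injected_goptions / mac_goptions
print(round(slowdown, 3))  # injected card delivers ~0.684x the throughput
```

So on this one kernel the injected card loses about a third of its throughput, even though most of the other tests show near parity.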
 
Anyone know if you need the EFI ROM if you're just planning on using a second card as an additional CUDA/OpenCL processor? I'm debating getting another GTX 285 (vanilla PC part) to use purely as a helper for Octane Render (a multi-GPU CUDA renderer), and I don't care about using it for another screen. My dual-monitor setup works fine off the GTX 285, and it would be nice if I could offload jobs to the second card when I need to use the machine while rendering (CUDA rendering lags the UI very badly).
 
10THzMac: you'd probably see the difference in speed in Mudbox, which is very bandwidth-heavy. A while ago I hoped to upgrade to a flashed 1GB Radeon, but the bandwidth cost was noticeable when working at the 2.5 GT/s link speed.
 