My PNY card, flashed with EVGA's BIOS, is detected at PCIe 1.1 speeds in GPU-Z, and it was the performance of this card that I was referring to. Sorry for the confusion.

For a comparison, do you have Windows-based Heaven benchmark scores for any of your flashed cards?

I believe you are in error if you are basing this on GPU-Z and you haven't run the little warm-up app that is part of it. It's a small render scene that will kick the card up to its full link speed.
 
You have my apologies. It's running in PCIe 2.0 after all. I don't know how I missed that but thank you for pointing it out.

It turns out that there are two components to seeing "Link Speed 5.0 GT/s" in Apple System Profiler: a functional aspect and a cosmetic one. I'm not sure if Nvidia did this out of necessity or as a means to slow down card flashers.

The way you see it function in Windows is the correct behaviour: the card stays at 1.0 until it is called upon, at which point it switches to 2.0, running faster and using more power.

In OS X it functions the same way, so it is possible to have System Profiler report "Link Speed 2.5 GT/s" yet have the card work perfectly, switching to 5.0 GT/s on demand just as it does in Windows.
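If you want to watch that happen in practice, a quick host-to-device copy test (the same sort of number CUDA-Z reports) will show it: the first transfer wakes the link up, and the sustained rate tells you which generation you are actually running at. This is only a rough sketch of my own, not anything from CUDA-Z; the file name, transfer size and the rule-of-thumb figures in the comments are my assumptions:

    // pcie_bw.cu -- rough host-to-device bandwidth check (error checking omitted for brevity).
    // Build with: nvcc pcie_bw.cu -o pcie_bw
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        const size_t bytes = 256 * 1024 * 1024;   // 256 MiB transfer
        void *host = NULL, *dev = NULL;
        cudaMallocHost(&host, bytes);             // pinned host memory for a fair measurement
        cudaMalloc(&dev, bytes);

        // Warm-up copy: this is what kicks the link out of its idle state.
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("Host -> Device: %.2f GB/s\n", (bytes / 1.0e9) / (ms / 1000.0));
        // Very roughly: ~2.5-3 GB/s suggests a PCIe 1.x x16 link, ~5-6 GB/s suggests 2.0 x16.

        cudaFree(dev);
        cudaFreeHost(host);
        return 0;
    }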

What System Profiler shows is not as important as what you see when running CUDA-Z or the OpenCL test that includes bandwidth. The best method is to Google "lspci" and install it, then run it with "-vv": you will get a listing of everything on the PCI bus, showing which link speed is supported and which is currently running.
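For example (assuming a pciutils-style lspci build; 10de is simply Nvidia's PCI vendor ID, and the exact wording of the output depends on the version you install):

    lspci -vv -d 10de: | grep -E "LnkCap|LnkSta"

which should print lines along these lines:

    LnkCap: Port #0, Speed 5GT/s, Width x16 ...
    LnkSta: Speed 2.5GT/s, Width x16 ...

LnkCap is what the link is capable of, LnkSta is what is negotiated right now, so seeing 2.5 GT/s there while the card is idle is exactly the behaviour described above.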
 
Getting back to the benchmarking question, a couple of observations.

First, the Heaven benchmark is meant for Mountain Lion only under OS X. I don't know how much difference this might make but it could be very significant. If you're looking at the scores in Lion, then perhaps it would be more appropriate for everyone to run it under Windows?

Secondly, when I ran it under Windows, I noticed that the details in the scenes are far superior to those in OS X. The cobbles and brickwork; the ropes, conduits and screws on the gun carriages are all an order of magnitude more complex. It makes comparisons across operating systems very misleading since the OS X benchmark would seem to be a comparatively easy task.

My Mac Pro is a 4,1/5,1 hybrid with a 3.33 GHz hex, 24 GB of RAM and a GTX 680.

Here is the result of the extreme preset (1600 x 900, windowed, 8 x AA) under 10.8.3:

[Screenshot: Heaven Extreme preset result under OS X 10.8.3]

Under Windows 7 64-bit, using the same preset with the OpenGL renderer, I get this:

[Screenshot: Heaven Extreme preset result under Windows 7, OpenGL renderer]

And with the DX11 renderer, here's the result:

[Screenshot: Heaven Extreme preset result under Windows 7, DX11 renderer]

I would guess that another reason for the drop in performance is that Windows 7 runs flashed cards at PCIe 1.1 speeds, but it's apparent from the frame rates that the Windows version of the benchmark gives the graphics card a much tougher job to do. If that is the case, then differences in the hardware feeding data to the cards should matter less than they do in the benchmarks under OS X.

*edit* It's been pointed out that the lack of tessellation in OS X's implementation of OpenGL is the real reason.

Are any of you guys with 2,1 and 3,1 Mac Pros able to check this out under Windows?


You should use Unigine Valley instead of Heaven for cross-platform benchmarks. It uses the OpenGL 3.2 Core profile on OS X, Windows, and Linux.
 
I just upgraded the CPUs in my Mac Pro 5,1 from 2.40GHz to 2.93GHz. Below are before and after benchmarks. Everything else in the system was unchanged.
 

Attachments

  • Untitled1.png
  • Untitled2.png