Read the excellent post Step-by-Step Instructions for Flashing GTX680, and you are golden.
Thanks for the information. I will do the flash in the next few days.
You can set the clocks back up using Kepler BIOS Tweaker in Windows.
Would this card work?
I think it would...
https://www.amazon.com/MSI-N680GTX-...e=UTF8&qid=1371469552&sr=8-7&keywords=GTX+680
Your card is very similar, maybe identical, to my original eVGA GTX 680 Mac Edition. Nice to know that this one works.
Gigabyte Reference GTX680 aka "lightsaber edition" => GV-N680D5-2GD-B*
* https://www.gigabyte.com/us/Graphics-Card/GV-N680D5-2GD-B#ov
HW: Mac Pro 5,1 (mid-2010)
SOFT: NVflash v5.134 => download: https://forums.macrumors.com/thread...-rates-pci-e-2-0-5gt-s.1603260/#post-17504407
FW: EVGA 2GB => download:
https://forums.macrumors.com/thread...-680-mac-edition.1565735/page-5#post-17132316
INSTRUCTION: step-by-step => https://forums.macrumors.com/thread...-pci-e-2-0-5gt-s.1603260/page-4#post-17832428
I also confirm that the Gigabyte GV-N680D5-2GD-B works great! (Boot screen, Mojave, etc.)
+ https://forums.macrumors.com/thread...e-gtx-680-models.1578255/page-2#post-17501849
I have flashed my new Gainward GeForce GTX 680 2GB today, and it works just fine.
I'm all happy now with a working boot screen again; makes life easier sometimes...
Anyway, I was wondering how I can read the GPU die temperature within macOS?
Within Boot Camp it's easy: I just run GPU-Z, for example, and it gives me all the temps I want to see.
Unfortunately, neither iStat Menus nor Macs Fan Control can read the GPU so far.
Anybody got a fix for me?
My Gainward GTX 680 can run a little hot, I think due to very conservative GPU fan speed control.
The fans also only ramp up at around 78 degrees, in my opinion a little late, which will slowly bake the GPU over time.
Maybe tools like Kepler BIOS Tweaker can fix the fan speed curve or something?
How can I fix this? I don't know what the best way is for me.
When I run Macs Fan Control on my 5,1, there's a reading for PCIe Ambient. Boost A is for the PCIe cooling, so try changing this to a higher rpm to reduce the GPU temp. The Ambient temp will rise if the card is running hotter. There's no Mac app I know of that monitors GPU temps, unfortunately, so this would be a good workaround. Maybe one of the wizards on here can offer a better solution?
This is a good solution, but you suggested the wrong fan. If you want the PCIe Ambient temp to run lower, then the "PCI" fan needs to be ramped up; it is the first fan at the top of the list. The "Boost A" fan is the fan in the CPU heatsink. Ramping up the Boost A fan will lower CPU temps, and likely the IOH temps as well.
Thanks skizzo for that correction... Evacuating the heat here is key!
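For anyone who wants to poll an SMC temperature sensor outside of Macs Fan Control, below is a minimal sketch in C using the AppleSMC user client, the same mechanism the fan utilities use. Two caveats: the GPU die sensor of a flashed PC card is not exposed through the SMC at all, and the "TA0P" (ambient) key used here is an assumption that may vary per machine, so treat this as a starting point rather than a finished tool.

```c
/* Minimal sketch: read one SMC temperature key on a classic Mac Pro.
 * Assumption: "TA0P" is an ambient sensor; the exact key behind the
 * "PCIe Ambient" reading may differ. Build with:
 *   clang smctemp.c -framework IOKit -o smctemp
 */
#include <stdio.h>
#include <IOKit/IOKitLib.h>

#define KERNEL_INDEX_SMC     2   /* AppleSMC user-client selector */
#define SMC_CMD_READ_BYTES   5
#define SMC_CMD_READ_KEYINFO 9

typedef struct { char major, minor, build, reserved; UInt16 release; } SMCVers;
typedef struct { UInt16 version, length; UInt32 cpuPLimit, gpuPLimit, memPLimit; } SMCPLimit;
typedef struct { UInt32 dataSize, dataType; char dataAttributes; } SMCKeyInfo;
typedef struct {
    UInt32        key;
    SMCVers       vers;
    SMCPLimit     pLimitData;
    SMCKeyInfo    keyInfo;
    char          result, status, data8;
    UInt32        data32;
    unsigned char bytes[32];
} SMCKeyData;

static UInt32 str2key(const char *s) {
    return ((UInt32)s[0] << 24) | ((UInt32)s[1] << 16) |
           ((UInt32)s[2] << 8)  |  (UInt32)s[3];
}

int main(void) {
    io_service_t smc = IOServiceGetMatchingService(kIOMasterPortDefault,
                                                   IOServiceMatching("AppleSMC"));
    io_connect_t conn;
    if (!smc || IOServiceOpen(smc, mach_task_self(), 0, &conn) != kIOReturnSuccess) {
        fprintf(stderr, "cannot open AppleSMC\n");
        return 1;
    }

    SMCKeyData in = {0}, out = {0};
    size_t outSize = sizeof(out);

    /* Step 1: ask the SMC for the key's size and type. */
    in.key   = str2key("TA0P");          /* assumed ambient-temperature key */
    in.data8 = SMC_CMD_READ_KEYINFO;
    IOConnectCallStructMethod(conn, KERNEL_INDEX_SMC, &in, sizeof(in), &out, &outSize);

    /* Step 2: read the raw sensor bytes. */
    in.keyInfo.dataSize = out.keyInfo.dataSize;
    in.data8 = SMC_CMD_READ_BYTES;
    outSize = sizeof(out);
    IOConnectCallStructMethod(conn, KERNEL_INDEX_SMC, &in, sizeof(in), &out, &outSize);

    /* Most temperature keys use the fixed-point SP78 format:
     * whole degrees in bytes[0], fraction in bytes[1]/256. */
    printf("TA0P: %.1f C\n", out.bytes[0] + out.bytes[1] / 256.0);

    IOServiceClose(conn);
    IOObjectRelease(smc);
    return 0;
}
```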
I am puzzled. My first GTX 680 was an ASUS DirectCU II TOP, flashed with the EVGA Mac Edition BIOS and with the clock speeds corrected using Kepler BIOS Tweaker; the speed reported by LuxMark was 1411 MHz. My thought was that this came from the difference between the card and the BIOS, but now I have purchased a genuine EVGA one, with the same layout as the Mac version, flashed it with the original BIOS, and LuxMark still reports 1411 MHz instead of the 1058 MHz it should...
My Mac Pro is a 4,1 upgraded to 5,1, with Mojave 10.14.3 installed. Am I missing something? No one here seems to have reported this before; could it be an issue with 4,1 Mac Pros only?
My flashed Gainward GTX 680 also shows 1411 MHz in LuxMark,
running a flashed Mac Pro 4,1-to-5,1 here with Mojave 10.14.3 as well.
It should be 1058 MHz, I think, according to the spec list from EVGA.
Should we worry, since our core is running at 1411 MHz all the time?
Or is this just a temporary speed boost when starting LuxMark, for example?
Ok, sorry for the delay in answering, but now I have verified what is really happening.
The OpenCL framework reported GPU speeds correctly up until Sierra; starting with High Sierra and Mojave the reported speed is totally wrong. When you see screenshots of LuxMark reporting the right speed on eBay and everywhere on the Net (and also here in various threads), you can bet you are looking at a Mac Pro running OS X 10.12.6 or earlier.
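To see what the OpenCL framework is reporting on your own machine, here is a minimal sketch in C that queries the same value LuxMark displays, via the standard clGetDeviceInfo call with CL_DEVICE_MAX_CLOCK_FREQUENCY; nothing here is specific to the GTX 680. On High Sierra/Mojave you should get the bogus 1411 MHz figure, on Sierra or earlier the 1058 MHz the firmware actually specifies.

```c
/* Minimal sketch: print the clock speed the OpenCL framework reports
 * for each GPU device -- the same number LuxMark shows. Build with:
 *   clang clockquery.c -framework OpenCL -o clockquery
 */
#include <stdio.h>
#include <OpenCL/opencl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id devices[8];
    cl_uint count = 0;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, devices, &count) != CL_SUCCESS) {
        fprintf(stderr, "no OpenCL GPU devices found\n");
        return 1;
    }

    for (cl_uint i = 0; i < count; i++) {
        char name[256];
        cl_uint mhz = 0;
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_MAX_CLOCK_FREQUENCY, sizeof(mhz), &mhz, NULL);
        /* On 10.13/10.14 this value no longer matches the firmware clocks. */
        printf("%s: %u MHz\n", name, mhz);
    }
    return 0;
}
```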
If you have an nVidia GPU and 10.13 installed, you can read the correct clock speed using the nVidia Web Drivers and CUDA-Z: since CUDA-Z polls the nVidia drivers (and not the OpenCL framework), the reported speed is real and correct.
Of course, the effective speed of the graphics card is what is written in the firmware, not what the OpenCL (or OpenGL) drivers report. If you are seeing poor performance in Cinebench, for example, keep in mind that Apple's OpenGL is stuck at 4.1 even on recent cards; for this reason, only Metal benchmarks are meaningful from High Sierra on.