Has anyone gotten an Nvidia card to display the actual amount of RAM on the card in System Profiler (and thus the system itself), ESPECIALLY if it's greater than 1024 MB, like 2.5 GB or 3 GB?
Back on topic: if anyone knows something, even something as cosmetic as editing a .plist, to get the system (and thus the profiler too) to recognize, OR BE TOLD MANUALLY, how much memory the card has, it would be most helpful. Thanks for any help anyone can offer.
EDIT: In addition, I tested a GTX 570 against a Mac Edition Radeon 5770, and the Radeon destroyed the GTX in Cinebench :-( . The 5770 was usually 50-95% faster in Cinebench than the GTX 570 with more than 1024 MB of VRAM.
Running Cinebench only tells you how well a card will run Cinebench. I don't think there are any paying jobs that require running Cinebench.
I have found that almost any Nvidia card with decent bandwidth gets the same score in Cinebench, which hints at some Nvidia driver limitation. An ATI 4870 from years back will outscore any Nvidia card I have tried. But then, I haven't tried in a while because I don't get paid to run Cinebench.
There is a simple way to make ATY_Init report whatever amount of RAM you want. It's in a plist inside, and the default is 512 MB. The field is called VRAM,totalsize and the string value is set to 20; if you set it to 40, it will read out 1024 MB. From that, I think anyone bright enough to remember to breathe can figure out how to put whatever number they want in there. (You will have to think a little, and maybe use "Magic Number Machine".)
If my friend Netkas is upset that I posted this, I will happily remove it. He was working on having this auto-detect, but I don't think he got it done before other pressing issues came up (his life, etc.).
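For anyone who doesn't want to reach for Magic Number Machine: my reading of the hint above is that the stored value is just the card's VRAM in bytes written in hex (0x20000000 bytes = 512 MB, 0x40000000 = 1024 MB, matching the "20"/"40" example), but that's my interpretation, not something the original post spells out. Here is a minimal Python sketch under that assumption; the key name and the ATY_Init plist location are taken from the post and not verified:

```python
#!/usr/bin/env python3
# Hedged sketch: convert a desired VRAM size into the hex byte count that
# (as I read the post above) goes into the ATY_Init plist field.
# 512 MB -> 0x20000000 and 1024 MB -> 0x40000000, matching the "20"/"40" hint.

def vram_bytes_hex(size_mb: int) -> str:
    """Return the VRAM size in bytes as a hex string."""
    return hex(size_mb * 1024 * 1024)

if __name__ == "__main__":
    # 2560 MB (2.5 GB) and 3072 MB (3 GB) are the sizes from the original question.
    for mb in (512, 1024, 2560, 3072):
        print(f"{mb:>5} MB -> {vram_bytes_hex(mb)}")
```

So a 3 GB card would work out to 0xC0000000; whether System Profiler actually accepts values above 1024 MB is exactly the open question in this thread.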
As far as benchmarks go, you need to keep in mind what your apps use in terms of rendering and what they need to run well. CUDA-Z is accurate for predicting CUDA performance. Resolve users have found that the "Single Precision Float" field correlates well with Resolve performance.
OpenGL View gives a reading that is largely dependent on memory bandwidth. This allows an older high-performance card like a G80-based 8800 Ultra to get really good numbers without giving away that it is a five-year-old card. I am referring to the "legacy" fields; I haven't played with the OpenGL 3.0+ parts enough to know what they signify.
I recently discovered that a 9400 GT with half the rendering pipes of a 9500 GT could score identically in the legacy part of GLView. It has the same G96 chip and the same memory interface and speed. In CUDA-Z, though, it got half of the 9500's score. I would argue that CUDA-Z is therefore a more accurate predictor of real-world use. I didn't try it with Cinebench.
I always get a laugh when people complain about a new card getting poor scores in X-Bench. It was last updated when George W. Bush was in office and is about as accurate as throwing each card down a golf driving range to see which one flies furthest.
Figure out what you need a card to do and bench it with something that measures that ability.