Hey Lex,

As mentioned, as a Cinema4D artist I have to find a measure directly related to Cinema4D.

If a 6870 beats a 7950 in Cinebench, it may well also beat the 7950 in OpenGL use inside Cinema4D. So whether or not that's logical, it should give a prediction of a card's performance under Cinema4D, since Cinebench uses the same core as Cinema4D.

peter

Hi Pete,

this is the GTX 770 2GB un-flashed GPU performance in my cMP running Mavericks with the latest NVIDIA web drivers. Will try Yosemite with the stock OS X drivers to see if anything changes, but I don't imagine much difference... the nMP smashes it with those D700's ...

 
Weirdly, in Yosemite on the default OS X drivers it improves by 3 fps, but it's still way off a dual D700 setup.

 
Sorry, but this is totally wrong.
C4D does use the latest OpenGL version; you are probably just looking at the minimum requirement.

Indeed, thanks for correcting. It's never too late for learning something. ;)

Aside from single-threaded CPU performance, which determines CB scores, does C4D not benefit from extra VRAM? I mean that even while you're getting the same FPS from, let's say, a 5870 and a 7950, more complex scenes should "run" smoother in the viewport on the 7950.
 
Indeed, thanks for correcting. It's never too late for learning something. ;)

Aside from single-threaded CPU performance, which determines CB scores, does C4D not benefit from extra VRAM? I mean that even while you're getting the same FPS from, let's say, a 5870 and a 7950, more complex scenes should "run" smoother in the viewport on the 7950.
Absolutely:)
1GB is more than sufficient to run the Cinebench scene, so it's very unlikely you'd see a speed improvement just by increasing the GPU VRAM. That scene, if I remember correctly, is about 250,000 polygons with only a few textures; you will never reach VRAM limits with that (for example, my old 5770 can handle about 10,000,000 polys "smoothly" enough).
Of course more VRAM comes into play when you raise the number of polygons/textures; in that case I suspect the 6GB on my D700s is more than useful, considering that I normally have to display anywhere from 10 to 200 million polygons.
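
If anyone wants to sanity-check those numbers, here's a rough back-of-envelope sketch in Python. The 32-bytes-per-vertex layout (position + normal + UV) and the vertex-sharing factor are my own assumptions about a typical viewport mesh, not anything measured from C4D:

```python
# Back-of-envelope VRAM estimate for a viewport mesh.
# Assumed layout: 3 pos + 3 normal + 2 UV floats per vertex (32 bytes),
# plus three 4-byte indices per triangle. Purely illustrative numbers.

BYTES_PER_VERTEX = 8 * 4       # eight 32-bit floats
BYTES_PER_TRI_INDICES = 3 * 4  # three 32-bit indices

def mesh_vram_mb(triangles, unique_verts_per_tri=0.6):
    """Estimate VRAM in MB; unique_verts_per_tri accounts for vertex
    sharing in a typical closed mesh (~0.5-0.6 unique verts/tri)."""
    vertices = triangles * unique_verts_per_tri
    total = vertices * BYTES_PER_VERTEX + triangles * BYTES_PER_TRI_INDICES
    return total / (1024 ** 2)

print(f"Cinebench-scale scene (250k tris): {mesh_vram_mb(250_000):.0f} MB")
print(f"10M-tri scene: {mesh_vram_mb(10_000_000):.0f} MB")
print(f"200M-tri scene: {mesh_vram_mb(200_000_000) / 1024:.1f} GB")
```

Even with generous padding for textures and driver overhead, the Cinebench scene sits two orders of magnitude below a 1GB card's limit, while 200 million polys land right around the D700's 6GB.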
 
Will try Yosemite with the stock OS X drivers to see if anything changes, but I don't imagine much difference... the nMP smashes it with those newer CPUs ...

Fixed that for you. The Ivy Bridge-E CPUs in the nMP are many generations ahead of the cMP's Nehalem/Westmere CPU architecture, and are massively faster even at similar clock speeds.
 
Fixed that for you. The Ivy Bridge-E CPUs in the nMP are many generations ahead of the cMP's Nehalem/Westmere CPU architecture, and are massively faster even at similar clock speeds.

Here's my nMP with D700's. Note how the CPU score under OS X is higher than under Windows, but the OpenGL score under Windows is better.




Ah right... Just that the nMP scores lower on the CPU rendering than mine but the OpenGL score is through the roof...
 
Fixed that for you. The Ivy Bridge-E CPUs in the nMP are many generations ahead of the cMP's Nehalem/Westmere CPU architecture, and are massively faster even at similar clock speeds.
Well, while it's sure that faster single-thread performance will help your viewport performance, I suspect there's also some driver optimization involved for the DXXX GPUs. I mean, the IB CPUs are about 30-40% faster in single core than the older MPs (from 90/100 to 125/140 points), while the OGL test shows almost double the performance (from 40/50 to 80/90 FPS).
Of course, even though the DXXX are a solid option and significantly faster than other MP cards in C4D, they are still slow compared to GPUs fed by overclocked desktop CPUs.
 
Well, while it's sure that faster single-thread performance will help your viewport performance, I suspect there's also some driver optimization involved for the DXXX GPUs. I mean, the IB CPUs are about 30-40% faster in single core than the older MPs (from 90/100 to 125/140 points), while the OGL test shows almost double the performance (from 40/50 to 80/90 FPS).
Of course, even though the DXXX are a solid option and significantly faster than other MP cards in C4D, they are still slow compared to GPUs fed by overclocked desktop CPUs.

Memory is much faster (1866MHz vs 1066MHz) as well, so it's not as simple as just comparing CPU perf in some other benchmark. As always, you can just grab the OpenGL Driver Monitor from the Apple developer website and enable the stats that track GPU utilization. I'd be surprised if the GPUs are running at 100%.
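
For a quicker look without installing the developer tools: the GPU drivers on OS X usually publish the same counters in the I/O registry. Here's a minimal Python sketch, assuming your driver exposes a PerformanceStatistics dictionary with a "Device Utilization %" key (the AMD and NVIDIA web drivers generally do, but it varies by driver):

```python
# Read GPU utilization from the I/O registry on OS X. Assumes the
# driver publishes "Device Utilization %" inside PerformanceStatistics;
# if yours doesn't, the list simply comes back empty.
import re
import subprocess

def gpu_utilization():
    out = subprocess.check_output(
        ["ioreg", "-r", "-d", "1", "-c", "IOAccelerator"], text=True
    )
    return [int(m) for m in re.findall(r'"Device Utilization %"\s*=\s*(\d+)', out)]

for i, pct in enumerate(gpu_utilization()):
    print(f"GPU {i}: {pct}% busy")
```

Run it while the OpenGL test is going; if it reports well under 100%, the card is waiting on the CPU.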
 
I like the OGL scores of the nMP.
But on the CPU side I love my nearly 1400 points.
To reach them with an nMP, a few thousand bucks have to be let loose (and still it's only a 12-core... ;-/ )

I quite like the 1600 points :eek: those dual 3.46's are screamers
 
I know Cinebench relates the GPU power to the single-core power. But I also know a 5870 scores higher than a 5770 or a 4870 or a GT120. That's one point I tried to make clear in my initial post: I do not want to start a discussion about Cinebench.
I just need some plain, simple scores (e.g. as provided by cbscores.com). That's all... :)

Thank you
peter

Here are the Cinebench scores for one of my three old (purchased circa late 2009) PowerColor HD 5970's [ http://www.powercolor.com/us/products_features.asp?id=210#Specification ] in Windows. When that card was installed in my 2009 -> 2010 Mac Pro in 2012, then running OS X 10.7/Lion, it scored 49 points (in CB 11.5) because I couldn't get that OS to recognize both of the card's GPUs:
CPU - 4xE5 4650; Cores - 32; Threads - 64; GHz - 2.70; Graphics Card - Radeon 5900; OS - Windows 2008; Single - 121; Render - 3,791; OpenGL - 99 [ http://cbscores.com/ ]
 
Cinema4D is anything but poorly written, trust me.
The interesting thing is that on Windows a stronger card also means a better OGL result; just look at http://cbscores.com/index.php?sort=ogl&order=desc. So to my mind the graphics card drivers for Mac are poorly written, or Mac OS X is the bottleneck. Feels familiar somehow... :-/

I agree with you wholeheartedly that Cinema 4d (C4d) isn't poorly written [and neither is Cinebench]. I've been a C4d user since 1991, when it was called "Fast Ray," running it on my Commodore Amigas. In 1993, Fast Ray became Cinema 4d. I fully appreciate the relationship between the benchmark and the application as a true indicator of what one should expect in C4d based on one's benchmark score. I also use Cinebench (and Geekbench) to help me outfit and tune my builds. I find nothing distasteful about Maxon's creating a benchmark for users of its software. That Cinebench has since gained wider acceptance is a tribute to Maxon's foresight. The same type of use expansion has begun to occur with Otoy's OctaneRender benchmark utility.

I have no doubt that higher CPU clock speeds affect the OpenGL scores a little, and that cannot be avoided completely because of the CPU's role in doing what Cinebench's OpenGL test has to do. However, one should not forget that overclocking a CPU also usually overclocks (a) the memory speed [unless the memory is downclocked in BIOS], which affects the OpenGL score a little because memory plays a role in what the CPU does while that test is being run (so it's not just the higher CPU clock), and (b) the QPI, which we used to (and sometimes still) refer to as the "bus speed"; increasing the bus speed has a slight impact. Also, depending on a Windows system's BIOS, users with Nehalem Xeons, Westmere Xeons and i7 CPUs (Nehalem, Westmere, and the K versions since the intro of Sandy Bridge) are usually able to manipulate one or more of these variables independently.

Maxon hasn't tried to dupe anyone. All one has to do is thoroughly read "CINEBENCH R15 TECHNICAL INFORMATION AND FAQ" [ http://www.maxon.net/pt/products/cinebench/technical-information.html ], which states, in relevant part:

1) “To prevent the scene being displayed much too slowly on old graphics cards or much too fast on the latest hardware, CINEBENCH estimates the graphics card performance so the scene will maintain a consistent duration (approximately 30 seconds). Faster graphics cards will display the scene much smoother than slower ones. If a graphics card can display a higher frame count than the original scene speed, subframes will be displayed and properly measured.”

2) “Graphics card performance as measured by CINEBENCH reflects the power of the graphics card in combination with the system as a whole. Unfortunately the system contribution cannot be specifically measured. The same graphics card in a faster computer will typically give better results than in a slower system. The overall performance depends on various factors including processor, memory bus and chipset.

The graphics benchmark in CINEBENCH is designed to minimize the influence of other system components. All geometry, shaders and textures are stored on the graphics card prior to measurement, and no code is loaded during the measurement process. This minimizes the system influence, but unfortunately cannot eliminate it entirely.”

3) “Graphics card drivers can greatly affect the benchmarking performance.”
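
Not Maxon's code, obviously, but point 1 boils down to something like this toy Python sketch: fix the wall-clock budget, let the card get through as many (sub)frames of the animation as it can, and report frames per second.

```python
# Toy version of the fixed-duration measurement the FAQ describes.
# render_frame(t) is assumed to draw the scene at animation time
# t in [0, 1); a faster card calls it more often and so displays
# "subframes" between keyframes that a slow card never shows.
import time

def opengl_score(render_frame, target_seconds=30.0):
    start = time.perf_counter()
    frames = 0
    elapsed = 0.0
    while elapsed < target_seconds:
        render_frame((elapsed / target_seconds) % 1.0)
        frames += 1
        elapsed = time.perf_counter() - start
    return frames / target_seconds   # reported as FPS
```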
 
This Hackintoshed i7 with GTX770 does rather well in the OpenGL test, not so good on the CPU:
 

There is a point in Cinebench where the CPU is the bottleneck. In a 3,1 with dual quad-core 3.2 GHz CPUs, that point starts at the bottom.

A GT 640 is 100% the equal of a GTX Titan Black in Cinebench on that machine. I am NOT exaggerating.

On machines with faster CPUs and RAM, the point of bottlenecking is higher. But once you hit that point, all cards become equal, with the CPU being the only differentiating factor.
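
One way to picture it (a toy model of my own, not anything Cinebench computes): the frame rate you see is roughly the minimum of what the CPU can feed and what the GPU can draw, so past the CPU ceiling every card scores the same. The rates below are made up for illustration:

```python
# Toy bottleneck model: observed FPS ~ min(CPU feed rate, GPU draw rate).
# All numbers are hypothetical, chosen only to illustrate the ceiling.

def observed_fps(cpu_feed_fps, gpu_draw_fps):
    return min(cpu_feed_fps, gpu_draw_fps)   # the slower stage wins

CMP_3_1_FEED = 45   # pretend feed rate of a 3,1's 3.2 GHz quads

for name, gpu_fps in {"GT 640": 50, "GTX 680": 140, "GTX Titan Black": 300}.items():
    print(f"{name}: {observed_fps(CMP_3_1_FEED, gpu_fps)} fps")
# All three print 45 fps: once the CPU is the bottleneck, the cards tie.
```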
 
Basic question so I can try for myself.

How do you select which GPU Cinebench tests?
I have two cards in my Mac Pro and it gives no option to test the second card.

Or am I missing something basic here as to how the Mac uses its second graphics card?
 
Basic question so I can try for myself.

How do you select which GPU Cinebench tests?
I have two cards in my Mac Pro and it gives no option to test the second card.

Or am I missing something basic here as to how the Mac uses its second graphics card?

It depends on the application actually.

Some apps see both cards, some don't, and some you have to go into a preference to enable it to run on both cards.

What I've also found is that there is a performance hit for using a card for both rendering and displaying video, so I have a single-slot GT120 to run the monitors, leaving both cards free to do nothing but GPU compute.
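
If your render app lets you pick the compute device by hand, a quick way to see what the OS actually exposes is the sketch below (Python with the third-party pyopencl module, which you'd have to install; the device names are whatever the driver reports):

```python
# List the OpenCL-visible GPUs so compute work can be pointed away
# from the card driving the monitors. Requires the third-party
# pyopencl package (pip install pyopencl).
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices(device_type=cl.device_type.GPU):
        vram_mb = dev.global_mem_size // (1024 ** 2)
        print(f"{dev.name.strip()} - {vram_mb} MB VRAM")
```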
 
Cinebench scores have always confused me. It only runs on the graphics card hooked up to whatever you designate as your primary display. I have a 7950 in slot 1 and a GTX 970 in slot 2, each hooked up to a 4K monitor.

So for me the 7950 gets 63 fps, and switching to the GTX 970 I get 42 fps.

This makes it appear that it's a driver issue, and that the nMP gets to use both cards together rather than just the one on the "Primary Display" as we're stuck with on the cMP. In the benchmark and in C4D I can't have both cards rendering together.

Can someone with an nMP post a pic of the OpenGL capabilities in the preference menu? Similar to the benchmark, whichever monitor is designated the primary one (the one with the menu bar in Displays preferences) gets the GPU attached to it listed.
 
Hey folks,
I recently upgraded my 12-core 2.93 GHz Mac Pro 5,1 to 12x 3.46 GHz, and my ATI Radeon HD 5870 raised its OGL score from 62 to 67 fps...
peter

PS: having upgraded three Mac Pros, I am now rendering at 3x12x 3.46 GHz.... yay! :)
 



The good old Radeon 5870 in my Mac Pro 5,1 6-core gets 70-71 fps. Better than a GTX 980/970? I can't believe it.
I was going to buy such a GeForce, but seeing this, I think I'll wait.
No web driver problems, no black screens, and I am able to run 10.6.8-11.2...
 
Did you read the thread? Cinebench is completely meaningless.

The benchmark is heavily CPU bound, AMD drivers have less CPU overhead compared to Nvidia, so your 5870 will outperform every single Nvidia card you can buy, simple as that.

Have a look here: http://www.tonymacx86.com/graphics/177227-graphics-testing-benchmarking-chart-4.html#post1158102

Scores are perfectly proportional to CPU clock speed when OC'ing, but don't change notably when upgrading the GPU.


The thing you missed is that this is completely irrelevant to other applications/games. A GTX 970 will easily outperform your 5870 in gaming or GPGPU applications, just not in that stupid benchmark.
 