Uni seems to be CPU bound, as you've all said.
Mine finished FurMark at native res without issues, but I have seen with my own eyes how the GF110 power limiter works.
It has beaten your 8800 at least, Tesselator!
How are you powering your 570? I use non-standard PCIe power cables (about 15 AWG).
Yeah, looks close enough. I guess folks can see what I mean about being able to turn on more stuff but getting the same FPS and score, just by putting yours and mine side by side.
I'm powering mine from the motherboard's dual mini-6-pin connectors, but I'm kinda wondering if I should use an aux PSU anyway. I dunno if it's a bug in FurMark 0.2.1 and 0.3.0 or not (I guess not), but at 1920x1080 with AA turned off I can only run the test for about 15 to 20 seconds before the system power shuts completely off with a click. I'm assuming it's an overdraw tripping some over-current protection relay that clicks and shuts it all off. With AA turned ON it slows down enough that it doesn't overdraw, and the test completes fine. <shrug> Tiz a mystery...
Also, I'm not sure what a "GF110 power limiter" is.
Tesselator, I have posted about this elsewhere. I can shut off my 3,1 like a LIGHT by running FurMark with a 7970, a GTX 580, or even a GTX 570. The PC GPU drivers were written to detect this app and keep this from happening, but OS X contains no such protection. My 4,1 can take much more punishment.
Would be interesting to get the paid version of Hardware Monitor and see EXACTLY how many watts it takes to engage the "Big Brother" feature.
Uni seems to be CPU bound, as you've all said.
How do you differentiate between a CPU bottleneck and a PCIe bandwidth bottleneck in this case?
Compare 4.0 scores between comparable machines:
Tesselator's 1,1 2.66 GHz (post #1), GermanyChris 1,1 2.33 GHz (post #17), me 1,1 3.0 GHz (this post).
As you can see, the score increases quite linearly with CPU frequency (a rough sanity-check sketch is at the end of this post).
Here is my 4.0 score:
[Image: 4.0 benchmark score]
PCIe bandwidth matters too, no doubt, but a comparison between similar machines is more meaningful.
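For anyone who wants to eyeball this the same way, here's a minimal sketch in Python. The scores are made-up placeholders, not the actual numbers from the screenshots: the idea is just that if score divided by CPU clock stays roughly constant across otherwise similar 1,1 machines, the benchmark is behaving as CPU-bound.

```python
# Rough illustration only: placeholder scores, NOT the real Heaven results.
# If score/GHz stays roughly constant across similar machines, the test is
# scaling with CPU clock, i.e. it is CPU-bound on these 1,1 Mac Pros.
runs = [
    ("1,1 @ 2.33 GHz", 2.33, 420.0),   # placeholder
    ("1,1 @ 2.66 GHz", 2.66, 480.0),   # placeholder
    ("1,1 @ 3.00 GHz", 3.00, 540.0),   # placeholder
]
for name, ghz, score in runs:
    print(f"{name}: score {score:.0f}, score/GHz = {score / ghz:.1f}")
```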
Ah, thanks for showing me what I already should have read from the images. But yes, for Unigine the processor speed seems to matter, for the 1,1 at least.
For what it's Worth.....
That was a revelation to me.....
I of course get what you're saying, but then the Uni people should be billing this as a CPU test and not a GPU test. And actually in this case they should bill it as a "How crappy unbalanced code works on your computer" test.
Tesselator..... You have a good Night...... =)
Hehe... Thx bro. Even though it's only 1:30 in the afternoon here.
BTW, I noticed your location... I used to live near Bushard and Atlanta. Peace!
OMG.....
I owned a house at Bushard and Adams.........
Now I'm at 8th & Walnut. By the pier. =) Peace +
Getting back to the benchmarking question, a couple of observations.
First, under OS X the Heaven benchmark is meant for Mountain Lion only. I don't know how much difference this might make but it could be very significant. If you're looking at the scores in Lion, then perhaps it would be more appropriate for everyone to run it under Windows?
Secondly, when I ran it under Windows, I noticed that the details in the scenes are far superior to those in OS X. The cobbles and brickwork, and the ropes, conduits and screws on the gun carriages, are all an order of magnitude more complex. It makes comparisons across operating systems very misleading, since the OS X benchmark would seem to be a comparatively easy task.
My Mac Pro is a 4,1/5,1 hybrid with a 3.33 GHz hex, 24 GB of RAM and a GTX 680.
Here is the result of the extreme preset (1600 x 900, windowed, 8 x AA) under 10.8.3:
[Image: Heaven extreme preset result, OS X 10.8.3]
Under Windows 7 64-bit, using the same preset with the OpenGL renderer, I get this:
[Image: same preset, Windows 7, OpenGL renderer]
And with the DX11 renderer, here's the result:
[Image: same preset, Windows 7, DX11 renderer]
I would guess that another reason for the drop in performance is that Windows 7 runs flashed cards at PCIe 1.1 speeds, but it's apparent from the frame rates that the Windows version of the benchmark is giving the graphics card a much tougher job to do. If this is the case, then differences between the hardware that is feeding data to the cards should make less difference than they do in the benchmarks under OS X.
Are any of you guys with 2,1 and 3,1 Mac Pros able to check this out under Windows?
cal6n said: Secondly, when I ran it under Windows, I noticed that the details in the scenes are far superior to those in OS X. The cobbles and brickwork, and the ropes, conduits and screws on the gun carriages, are all an order of magnitude more complex. It makes comparisons across operating systems very misleading, since the OS X benchmark would seem to be a comparatively easy task.
It's called "tessellation". Windows supports DX11 and OpenGL 4, which enable tessellation, while Mac OS X is stuck on OpenGL 3.2, which does not have tessellation shaders.
But yes, it does kind of invalidate comparisons between the two OSes, unless you force DX11/GL4 off on the Windows side.
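If anyone wants to check what GL version their OS X install actually hands to an app, here's a minimal sketch in Python (assuming the pyGLFW and PyOpenGL packages are installed; the window size and title are arbitrary). Tessellation shaders need OpenGL 4.0 or ARB_tessellation_shader, so a 2.1 or 3.2 version string explains the missing geometry detail:

```python
# Minimal sketch: create a hidden GL context and print what the driver
# reports. Assumes pyGLFW ("glfw") and PyOpenGL are installed.
import glfw
from OpenGL.GL import glGetString, GL_VERSION, GL_RENDERER

if not glfw.init():
    raise SystemExit("GLFW init failed")
glfw.window_hint(glfw.VISIBLE, glfw.FALSE)      # don't show a window
win = glfw.create_window(64, 64, "glcheck", None, None)
if not win:
    glfw.terminate()
    raise SystemExit("could not create a GL context")
glfw.make_context_current(win)
# Note: without core-profile hints, OS X hands back a legacy 2.1 context;
# even with them, 10.8 tops out at 3.2, which has no tessellation stages.
print("GL_VERSION: ", glGetString(GL_VERSION).decode())
print("GL_RENDERER:", glGetString(GL_RENDERER).decode())
glfw.terminate()
```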
Your images are not showing
cal6n said: I would guess that another reason for the drop in performance is that Windows 7 runs flashed cards at PCIe 1.1 speeds, but it's apparent from the frame rates that the Windows version of the benchmark is giving the graphics card a much tougher job to do. If this is the case, then differences between the hardware that is feeding data to the cards should make less difference than they do in the benchmarks under OS X.
Properly flashed cards run at 2.0 in Windows. It is unflashed cards that run at the slower speed in Windows. And this only affects 3,1 and later Mac Pros.