
N19h7m4r3

Original poster
It seems the new Tonga-based M295X could be a little powerhouse.

Danny Winget ran Cinebench on his new 5K iMac with an i7-4790K and M295X and got a Cinebench OpenGL score of 105 FPS in OS X.

Image


For reference, here are my D700s in Cinebench under OS X 10.9.2 and Windows 8.1.
Image


His Mac Pro with D500s gets only:
Image


His video of it.

I find that very impressive, to say the least, and I wonder how the M295X will do in other tests once proper in-depth reviews start coming out.
 
Unigine Valley ExtremeHD, run on different platforms (unfortunately I don't have the Windows figures for the M295X yet).
 

Attachments

  • ungine fps comparison.png
  • ungine score comparison.png
Last time I checked, Cinebench's OpenGL test is a CPU-limited benchmark, and you've got to account for that too. The 4790K is at the top of the heap in single-threaded performance, and again, my understanding is that this applies to Cinebench running OpenGL.
 
Last time I checked, Cinebench's OpenGL test is a CPU-limited benchmark, and you've got to account for that too. The 4790K is at the top of the heap in single-threaded performance, and again, my understanding is that this applies to Cinebench running OpenGL.

Well, the 4790K loses to my 3.5GHz 6-core Xeon in Cinebench, while my D700 loses to the M295X in FPS in the same benchmark.
 
Well, the 4790K loses to my 3.5GHz 6-core Xeon in Cinebench, while my D700 loses to the M295X in FPS in the same benchmark.

You're now talking about two different tests. The CPU test is multithreaded, so the Mac Pro should, and does, beat the 4790K. The OpenGL test, on the other hand, apparently only uses a single thread plus the GPU.

The 4790K will beat your Mac Pro's Xeon in single-threaded work and lose to it in multi-threaded work when it comes to CPU tasks.
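
Rough back-of-envelope sketch of that trade-off (the clock speeds and the "relative IPC" factor below are illustrative assumptions, not measured figures):

```python
# Hypothetical figures: a 4-core i7-4790K at ~4.4 GHz turbo vs a 6-core
# 3.5 GHz Xeon. "relative_ipc" is a made-up per-clock scaling factor used
# only to show the shape of the trade-off, not a benchmark result.

def throughput(cores, clock_ghz, relative_ipc, threads_used):
    """Crude estimate: active cores x clock x per-clock work."""
    active = min(cores, threads_used)
    return active * clock_ghz * relative_ipc

i7_single   = throughput(cores=4, clock_ghz=4.4, relative_ipc=1.0, threads_used=1)
xeon_single = throughput(cores=6, clock_ghz=3.5, relative_ipc=1.0, threads_used=1)
i7_multi    = throughput(cores=4, clock_ghz=4.4, relative_ipc=1.0, threads_used=12)
xeon_multi  = throughput(cores=6, clock_ghz=3.5, relative_ipc=1.0, threads_used=12)

print(i7_single, xeon_single)  # 4.4 vs 3.5   -> the 4790K wins single-threaded
print(i7_multi, xeon_multi)    # 17.6 vs 21.0 -> the 6-core Xeon wins multi-threaded
```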
 
You're now talking about two different tests. The CPU test is multithreaded, so the Mac Pro should, and does, beat the 4790K. The OpenGL test, on the other hand, apparently only uses a single thread plus the GPU. Makes sense?

Ah okay, I'm just quoting the scores from the tests. Nothing else.
 
Ah okay, I'm just quoting the scores from the tests. Nothing else.

I am not 100% sure how the OpenGL test in Cinebench works and I have never bothered to find out, so I am trusting xav8tor, but what he said makes perfect sense.

From what I've seen so far, the M295X has very similar performance to a single D700 in OS X.
 
I am not 100% sure how the OpenGL test in Cinebench works and I have never bothered to find out, so I am trusting xav8tor, but what he said makes perfect sense.

From what I've seen so far, the M295X has very similar performance to a single D700 in OS X.

Well, I just ran Cinebench four times and rebooted after the first attempt because I only got 46 FPS.

This is the best result out of the four runs. Yosemite has dropped my performance.
During the beta I got improved OpenGL performance, especially in games like Hitman: Absolution and Tomb Raider, where my minimum FPS more than doubled.

Something odd is going on here. Yosemite 10.10 was also a clean install, not an upgrade from Mavericks or the betas.

Image
 
Well, I just ran Cinebench four times and rebooted after the first attempt because I only got 46 FPS.

This is the best result out of the four runs. Yosemite has dropped my performance.
During the beta I got improved OpenGL performance, especially in games like Hitman: Absolution and Tomb Raider, where my minimum FPS more than doubled.

Something odd is going on here. Yosemite 10.10 was also a clean install, not an upgrade from Mavericks or the betas.

Image

You're right that something odd is going on with Cinebench, because I have the exact same nMP as you and I get similar results in Yosemite. However, looking at my Unigine scores across different platforms, I'm not sure that Yosemite is the culprit. It would be good if we could find someone with an nMP with the same config who is still on Mavericks; then we could compare the Unigine Valley ExtremeHD scores between Yosemite and Mavericks. I've never really trusted Cinebench as a reliable benchmark.
 
You're right that something odd is going on with Cinebench, because I have the exact same nMP as you and I get similar results in Yosemite. However, looking at my Unigine scores across different platforms, I'm not sure that Yosemite is the culprit. It would be good if we could find someone with an nMP with the same config who is still on Mavericks; then we could compare the Unigine Valley ExtremeHD scores between Yosemite and Mavericks. I've never really trusted Cinebench as a reliable benchmark.

Hopefully there is someone, although a difference of nearly 30 FPS is alarming to say the least. As it stands, the M295X nearly matches a GTX 780 Ti in Cinebench, with just a 7 FPS difference.

We might still have to wait for more in-depth reviews from the likes of AnandTech as well.

From tonymacx86:
http://www.tonymacx86.com/user-buil...igabyte-gtx-780-ti-windforce-oc-32gb-ram.html
Image
 
As it stands, the M295X nearly matches a GTX 780 Ti in Cinebench, with just a 7 FPS difference.

In my mind, the 780 Ti with a 4770K result proves what xav8tor was saying, and that the 4790K in the riMac is inflating the OpenGL score in Cinebench to ludicrous levels. There is no way the M295X can match a 780 Ti.

Look at the chart I made and posted above. The M295X is just a little bit faster than a 780M, so for it to be nearly as fast as a 780 Ti simply makes no sense, considering what we know about the specs of the 780 Ti and the M295X, unless we take xav8tor's explanation into account.
 
In my mind, the 780 Ti with a 4770K result proves what xav8tor was saying, and that the 4790K in the riMac is inflating the OpenGL score in Cinebench to ludicrous levels. There is no way the M295X can match a 780 Ti.

Look at the chart I made and posted above. The M295X is just a little bit faster than a 780M, so for it to be nearly as fast as a 780 Ti simply makes no sense, considering what we know about the specs of the 780 Ti and the M295X, unless we take xav8tor's explanation into account.

Yes, we all need more information: what's going on with its score, and why is the D700 suddenly doing so much worse?

I'm going to download Unigine to see what I get there as well.

EDIT: Looking at Barefeats, the M295X is a damn good card for gaming in OS X at 2560x1440.

Nearly double the FPS in Tomb Raider, which uses OpenGL 4.0, and just a tad faster than the 780M in Diablo, which is OpenGL 3.2.

It's certainly faster at OpenCL, anyway.
 

Attachments

  • Screen Shot 2014-10-22 at 15.58.22.png
  • Screen Shot 2014-10-22 at 16.02.48.png
Unigine Valley ExtremeHD, run on different platforms (unfortunately I don't have the Windows figures for the M295X yet).


Over 2,000 points in Unigine Valley on a GTX Titan in a Mac Pro 2008 here. :rolleyes:
 
Yes, we all need more information: what's going on with its score, and why is the D700 suddenly doing so much worse?

I'm going to download Unigine to see what I get there as well.

EDIT: Looking at Barefeats, the M295X is a damn good card for gaming in OS X at 2560x1440.

Nearly double the FPS in Tomb Raider, which uses OpenGL 4.0, and just a tad faster than the 780M in Diablo, which is OpenGL 3.2.

It's certainly faster at OpenCL, anyway.

Here again, you've got to look into each app and determine the relative importance of cores/threads used, CPU gen and clock speed, GPU drivers, workstation or consumer/gaming GPU, OpenCL, OpenGL, DirectX, and so on. Until you do that, it's an exercise in frustration.

One thing is for sure: most games simply will not run as well on a workstation (i.e., Xeon/FirePro) as they will on a gaming rig or consumer desktop, and Windows is a benefit for many of them. Right now, a 4790K paired with a GTX 980 is going to stomp just about everything else out there in terms of gaming, and in more than a few pro apps that aren't yet optimally multithreaded (where possible). Overclock that sucker and you've really got a beast, except for things like HD/4K rendering. Even there, it's not too shabby.
 
Over 2,000 points in Unigine Valley on a GTX Titan in a Mac Pro 2008 here. :rolleyes:

The points are useless; what are the minimum, average, and max FPS? The overall points make no sense, as the D700s have a better minimum FPS and the same max FPS, yet the GTX 780 Ti gets better points.

Since the D700s essentially double the minimum FPS and match the max, the point difference makes no sense, nor does the average FPS, since the 780 Ti hits a lower FPS.


Image


Image
 
Over the years, I've come to look at Cinebench results as nothing more than a judge of which machine would perform better in C4D. And since I don't use C4D anymore, it doesn't tell me anything I can make a decision based on.
 
Hope this isn't too off-topic, but is there any correlation here to the desktop 290X?

More specifically, can we assume that 10.10 drivers that support the M295X will also support a desktop R9 290X in a cMP?

Thanks in advance.

Edit:

Just saw on Netkas that people are working on it. Seems like the R9 290X works fine, but still no boot screens or PCIe 2.0 as of 22 October.
 
The points are useless; what are the minimum, average, and max FPS? The overall points make no sense, as the D700s have a better minimum FPS and the same max FPS, yet the GTX 780 Ti gets better points.

Since the D700s essentially double the minimum FPS and match the max, the point difference makes no sense, nor does the average FPS, since the 780 Ti hits a lower FPS.


Image

Image

Repeat after me: workstation computers and GPUs are not made for gaming...
 
The points are useless; what are the minimum, average, and max FPS? The overall points make no sense, as the D700s have a better minimum FPS and the same max FPS, yet the GTX 780 Ti gets better points.

Since the D700s essentially double the minimum FPS and match the max, the point difference makes no sense, nor does the average FPS, since the 780 Ti hits a lower FPS.


Image

Image

You've just posted Unigine Heaven, and we were talking about Valley. :)

Also, the preset must be ExtremeHD and not a custom one.
 
The points are useless; what are the minimum, average, and max FPS? The overall points make no sense, as the D700s have a better minimum FPS and the same max FPS, yet the GTX 780 Ti gets better points.

Since the D700s essentially double the minimum FPS and match the max, the point difference makes no sense, nor does the average FPS, since the 780 Ti hits a lower FPS.

Wrong.

The minimum FPS could be hit just once during the test run because of a driver issue or something external to the GPU, like the CPU being heavily used by another process (disk indexing, say) for a few seconds, or a scene loading; anything. It only takes one moment to break the min FPS result. Max FPS is just the shortest time taken to render some random frame (maybe it was a clear sky with no objects), which is pretty useless; no review ever compares max FPS.

Average FPS is the best indicator for benchmarks, and the points actually represent that.
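
To illustrate with made-up numbers (these frame times are hypothetical, not from any run in this thread), a single hitch tanks the minimum FPS while the average, which the score tracks, barely moves:

```python
# Minimal sketch with invented frame times: one 100 ms stall in an otherwise
# steady ~60 FPS run ruins the minimum FPS but hardly touches the average.

def fps_stats(frame_times_ms):
    """Return (min, avg, max) FPS for a list of per-frame render times in ms."""
    instantaneous = [1000.0 / t for t in frame_times_ms]
    total_seconds = sum(frame_times_ms) / 1000.0
    average = len(frame_times_ms) / total_seconds  # frames rendered per second overall
    return min(instantaneous), average, max(instantaneous)

smooth  = [16.7] * 300            # steady ~60 FPS run
hitched = [16.7] * 299 + [100.0]  # same run with one 100 ms stall

print(fps_stats(smooth))   # min/avg/max all ~59.9 FPS
print(fps_stats(hitched))  # min falls to 10 FPS, average stays ~58.9 FPS
```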
 
Just saw on Netkas that people are working on it. Seems like the R9 290X works fine, but still no boot screens or PCIe 2.0 as of 22 October.

Thank you for sharing this! It gives me hope!

I recently upgraded to Yosemite and an MSI Lightning 290X 4GB in a Cubix Xpander connected to my 2010 Mac Pro, and I've been getting worse performance than with my old PowerColor 280X 3GB: about 25% less, to be more accurate, in Adobe Premiere Pro and Media Encoder. I was wondering why until now. I hope the savvy sleuths over at Netkas can discover a way to unlock PCIe 2.0.
 
The Barefeats tests are a bit unfair GPU-wise. The M295X should compete with the D700; the D300 is more like the M290X. I assume the CPU was the priority.
 
These benchmarks are getting more and more crazy.

Especially the one from the first post. One shows that this GPU is a beast, the next that it's hardly an upgrade from last year.

Come on, give us some clarity!
 