These benchmarks are getting more and more crazy.

Especially the one from the first post. One shows that this GPU is a beast, the next that it's hardly an upgrade from last year.

Come on, give some clarity!

I think if you read the rest of the thread, the reasons for the result in the first test will become clearer. :)

Also, people tend to get too excited and forget what some of these benchmarks mean. For example, Geekbench results are now thrown around as gospel, the final authority on CPU and memory performance. This is not necessarily the case. Will a computer with 20,000 points be 2x faster in my workflows than one with 10,000 points? No. It just shows that in the particular operations Geekbench tests, that CPU managed to score twice as many points, and last time I checked, how those points are calculated has never been explained by the creators.

----------

Over the years, I've come to look at Cinebench results as nothing more than an indicator of which machine would perform better in C4D. And since I don't use C4D anymore, it doesn't tell me anything I can base a decision on.

Very wise words, imo.
 
This Hackintosh does a decent enough job at Valley despite the OpenGL driver deficit:
 

[Attachment: Valley HD.jpg, 221.8 KB]
First of all, Cinebench is an awful benchmark. Second, to truly assess the performance of the m295x, one needs to do a series of proper benchmarks in Windows, under controlled conditions. I'd love for one of the professional reviewers to do that, but they probably will not — the gaming crowd is not so much into Macs.
 
First of all, Cinebench is an awful benchmark. Second, to truly assess the performance of the m295x, one needs to do a series of proper benchmarks in Windows, under controlled conditions. I'd love for one of the professional reviewers to do that, but they probably will not — the gaming crowd is not so much into Macs.

Why would it HAVE to be Windows? It runs OS X, so surely real-world work tests in OS X applications are what's needed.
What does gaming have to do with it, anyway?

I'm sure the likes of AnandTech will do a great job again; they covered the Mac Pro in a lot of detail by getting an actual professional with a need for a workstation to review it.
 
Why would it HAVE to be Windows? It runs OS X, so surely real-world work tests in OS X applications are what's needed.
What does gaming have to do with it, anyway?

Before I answer these questions, I believe it's important to make clear what exactly one wants to know about these GPUs. If we are talking 'performance', 'benchmarks', etc., I am first and foremost thinking about the raw capabilities of the GPU. However, it seems to me that many people in this thread are interested in a different question: 'will this GPU improve my workflow with application X?'. This is a very different question and does not necessarily say much about the actual GPU performance. Now to your post:

Firstly, OS X drivers are fairly immature and have more overhead compared to Windows drivers. If you want to measure a difference between the GPUs, it is important to know whether the results are representative, and having a bad driver can affect results in a number of ways.

Secondly, gaming is among the most demanding and complex workflows for a GPU, so gaming benchmarks are a natural way to test it properly, unless you have other workflows that really push the GPU (here, only scientific computing comes to mind).

Thirdly, you say 'real world work tests', so I guess you mean utilising the GPU for some computation workload. If it's photo or video editing — I wouldn't bother. Just take the cheaper card. You will probably not notice any real-world difference. Benchmarks that I have seen show that a 2x faster card translates to merely a 5% improvement in Photoshop.
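
That 'merely 5%' figure is what you would expect if only a small slice of a typical Photoshop action actually runs on the GPU. A back-of-the-envelope Amdahl's-law sketch (the 10% GPU-bound share below is purely an assumption for illustration, not a measured Photoshop number):

```c
#include <stdio.h>

/* Hypothetical Amdahl's-law sketch: if only a small fraction of an
 * application's time is spent on the GPU, doubling GPU speed barely
 * moves the total. The 10% share is an assumed, illustrative value. */
int main(void) {
    double gpu_fraction = 0.10; /* assumed share of total time spent on the GPU */
    double gpu_speedup  = 2.0;  /* the "2x faster card" */

    double new_time = (1.0 - gpu_fraction) + gpu_fraction / gpu_speedup;
    double overall  = 1.0 / new_time;

    printf("Overall speedup: %.3fx (~%.1f%% faster)\n",
           overall, (overall - 1.0) * 100.0);
    /* Prints ~1.053x, i.e. roughly the 5% seen in those Photoshop benchmarks. */
    return 0;
}
```

Doubling the speed of a part that only accounts for a tenth of the total time yields roughly a 5% overall gain; even an infinitely fast card would only buy about 11% in this scenario.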
 
Before I answer these questions, I believe it's important to make clear what exactly one wants to know about these GPUs. If we are talking 'performance', 'benchmarks', etc., I am first and foremost thinking about the raw capabilities of the GPU. However, it seems to me that many people in this thread are interested in a different question: 'will this GPU improve my workflow with application X?'. This is a very different question and does not necessarily say much about the actual GPU performance. Now to your post:

That seems illogical to me. Just because a card is very good at gaming does not mean it's going to be better at all workflows.

Benchmarks that focus on DirectX mean nothing in OS X, which uses OpenGL.
With OpenCL you can compare in either Windows or OS X, but that undercuts your 'only gaming tests matter' argument, since games don't focus on OpenCL.

Firstly, OS X drivers are fairly immature and have more overhead compared to Windows drivers. If you want to measure a difference between the GPUs, it is important to know whether the results are representative, and having a bad driver can affect results in a number of ways.

Do you have proof of that? Because on the Mac Pro, Unigine's OpenGL score in Windows is slower than its OpenGL score in OS X.
That indicates that the Windows OpenGL drivers are the ones with more overhead and immaturity.

[Image: VV6xBe4.png]


Secondly, gaming is among the most demanding and complex workflows for a GPU, so gaming benchmarks are a natural way to test it properly, unless you have other workflows that really push the GPU (here, only scientific computing comes to mind).

Gaming is the most intense? So 3D rendering, motion graphics, animation, CAD, and all that are easy and don't require powerful cards?
Here the GTX 780 Ti fails horribly against the AMD R7 260X in an OpenGL workflow. So does that mean the 260X is faster at gaming than the GTX 780 Ti?
No, because gaming and workflow performance do not go hand in hand and have no bearing on each other. If gaming were the most intense thing possible, the GTX 780 Ti would be stomping over everything there, not losing to the likes of an AMD 7790 in Maya 3D and Lightwave.

http://www.tomshardware.com/reviews/geforce-gtx-780-ti-review-benchmarks,3663-12.html

Thirdly, you say 'real world work tests', so I guess you mean utilising the GPU for some computation workload. If it's photo or video editing — I wouldn't bother. Just take the cheaper card. You will probably not notice any real-world difference. Benchmarks that I have seen show that a 2x faster card translates to merely a 5% improvement in Photoshop.

If that's true, why do 3D work, CAD, and more need powerful graphics cards?

Care to tell Tutor and others that they are wasting their money investing in powerful cards like the Titans for their workflows because there are cards that do better at gaming?

There are many user tests on this very forum showing how much powerful GPUs that excel at OpenCL speed things up and drastically reduce wait times during work in FCPX alone.

https://forums.macrumors.com/threads/1701775/

Yet testing in Windows still has no bearing at all on how the GPUs will perform in OS X, so it doesn't help.

That's why I and many others prefer real-world workflow tests over gaming ones to see the improvements, especially for finding out whether the GPU is the best choice for a given workflow. One card does not fit all, no matter how amazing that card is at gaming alone.
 
Because on the Mac Pro, Unigine's OpenGL score in Windows is slower than its OpenGL score in OS X.

On my Mac Pro, Unigine Valley scores in OpenGL are nearly the same, and the Windows score is usually slightly faster, by a frame or two.
 
That seems illogical to me. Just because a card is very good at gaming does not mean it's going to be better at all workflows.

There is no such thing as 'gaming' or 'workflows'. There are different kinds of computations. Different gaming-related algorithms stress different capabilities of the GPU. What I meant by 'use gaming tests' in my previous post is that what you call 'workflows' often do not utilise the GPU to its full potential, so it's difficult to use them to judge what the GPU can do.

Benchmarks that focus on DirectX mean nothing in OS X, which uses OpenGL.

Sorry, but that does not make any sense. Nowadays, the difference between OpenGL and DirectX is negligible. By properly utilising both APIs, you should get the same performance on the same hardware, unless the driver is doing something stupid.


Do you have proof of that? Because on the Mac Pro, Unigine's OpenGL score in Windows is slower than its OpenGL score in OS X.

This is really very interesting, thanks for this! It would indicate that AMD either wrote some very good drivers or something is fishy on the Windows side. With all my Macs, the performance on Windows was always around 20-30% better.


Gaming is the most intense? So 3D rendering, motion graphics, animation, CAD, and all that are easy and don't require powerful cards?

Well, this is tricky. First of all, many of those things have nothing to do with GPU performance, e.g. very few 3D rendering suites actually utilise the GPU. CAD is a whole different story - the problem with it is that it involves very complicated models and often uses wireframe rendering. Alas, most CAD applications I am aware of use old, obsolete, unoptimised code. And GPU vendors tend to capitalise on that by creating special drivers that accelerate this code, which they sell as 'professional GPUs'. For example, this is a good related story: http://doc-ok.org/?p=304



There are many user tests on this very forum showing how much powerful GPUs that excel at OpenCL speed things up and drastically reduce wait times during work in FCPX alone.

https://forums.macrumors.com/threads/1701775/

I can't make much sense of that thread quickly; the hardware configurations are very mixed. To do proper statistics, one would need to have it all in a tidy spreadsheet.

Yet testing in Windows still has no bearing at all on how the GPUs will perform in OS X, so it doesn't help.

That is true, of course (see also the first paragraph of my previous post).
 
Cinebench is a joke

That is the 100th time I have posted that

It tells you how fast your machine runs Cinebench, that's it.

A GPU test that scales linearly with CPU speed isn't relying much on the GPU.
 
Sorry, but that does not make any sense. Nowadays, the difference between OpenGL and DirectX is negligible. By properly utilising both APIs, you should get the same performance on the same hardware, unless the driver is doing something stupid.

Yes, I know OpenGL 4.4 and DirectX 11.2 are very similar; that's not what I've been disputing.
What I'm saying is that testing in Windows with DirectX, which is what the majority of games use, is pointless for finding out if a GPU suits a user's intended work and use.

If it were negligible, please explain the Tom's Hardware OpenGL results.
We all know how powerful the GTX 780 Ti is, yet in OpenGL for Maya 3D and Lightwave it's horribly slow, along with the Titan.

It all just reaffirms that just because a GPU is good at gaming does not mean it's good at everything else. Yes, gaming pushes GPUs hard, but that doesn't translate to overall performance in everything.

So far, according to you, if a GPU is godly at gaming, it's the best fit,
yet if it does poorly in professional apps, it's the app's fault, or its drivers.

Instead of simply acknowledging that to find the best GPU for the job, one needs to have it tested for one's specific needs.

This is really very interesting, thanks for this! It would indicate that AMD either wrote some very good drivers or something is fishy on the Windows side. With all my Macs, the performance on Windows was always around 20-30% better.

Also in regards to this, you can simply check Unigine in Linux as well. OpenGL in Linux is also faster than OpenGL in Windows.

We all know that GPU drivers are not as polished on Linux as they are on Windows.

Simply looking at "raw capabilities" through synthetic benchmarks is meaningless in the end for getting anything done. They're an indicator, but they do not dictate how a GPU will perform.

For instance, the GTX 780 Ti is in a league of its own against the likes of the R7 260X in 3DMark Fire Strike, although, as seen, it is horribly slow compared to it in Maya 3D.
This means that someone who bought the GTX card is now losing out if they work primarily in Maya.

It's why I firmly believe that real world tests are always superior.
 
Sorry, but that does not make any sense. Nowadays, the difference between OpenGL and DirectX is negligible. By properly utilising both APIs, you should get the same performance on the same hardware, unless the driver is doing something stupid.
Last time I checked, OpenGL was heavily neglected in the Windows world and DirectX was being pushed heavily by MS. Has the situation changed recently? On page 1 I posted a chart showing the difference in Unigine between DirectX and OpenGL in Windows, but that could be related to the benchmark itself.
 
On my Mac Pro, Unigine Valley scores in OpenGL are nearly the same, and the Windows score is usually slightly faster, by a frame or two.

I wanted to see how slow my old GTX 285 / 2.67GHz 4-core MP would stack up and... it froze my system and I had to do a hard reset. I'm going to assume it's not up for it.
 
Re: the discussion above on OpenGL in OS X vs. Windows, it's pretty well known that AMD tends to run better in OS X while Nvidia's strong suit is Windows. It's all about the drivers. The gap may be closing, but in many apps, especially single-threaded, high-demand OpenGL games, it's still an issue. And there again, you usually have to factor in the CPU due to the nature of OpenGL. A great example of this is X-Plane 10, supported by extensive standardised benchmarking of this primarily single-threaded OpenGL app.
 
Cinebench is a joke

That is the 100th time I have posted that

It tells you how fast your machine runs Cinebench, that's it.

A GPU test that scales linearly with CPU speed isn't relying much on the GPU.

Finally! Something we can agree on.
 
Last time I checked, OpenGL was heavily neglected in the Windows world and DirectX was being pushed heavily by MS. Has the situation changed recently? On page 1 I posted a chart showing the difference in Unigine between DirectX and OpenGL in Windows, but that could be related to the benchmark itself.

In general, that's 100% the case. DirectX gets a lot more attention, especially in gaming still.

It's not until recently that 3D engines like Unity and Unreal 4 started embracing OpenGL again; even CryEngine 3 is being ported to OpenGL for the game Kingdom Come: Deliverance.

Feature-set-wise, OpenGL 4.4 is very fast and comparable to DX 11.2, but it's not really used or catered for much, let alone optimised.

A simple example is the idTech 5 engine, which uses OpenGL 3.2. Every time a game built on it is released, AMD is in dire straits due to their drivers compared to NVIDIA.

NVIDIA also does very well in OpenGL in OS X, usually better than AMD.

My old 3.33GHz 6-core 2010 Mac Pro with a GTX 660, versus the 3.5GHz 6-core with D700:

[Image: iFSYL9Wo3pq1P.png]




Finally! Something we can agree on.

What makes that so interesting for the m295x in Cinebench is that the GTX 780 Ti was paired with a 4770K clocked at 4.5GHz, compared to the iMac's 4.0GHz.

If Cinebench scales linearly with CPU speed, the 780 Ti should be more than 7fps faster.

It's very curious, but as I stated before, we need a lot more information and some real-world tests to see how it compares.
 
In general, that's 100% the case. DirectX gets a lot more attention, especially in gaming still.

It's not until recently that 3D engines like Unity and Unreal 4 started embracing OpenGL again; even CryEngine 3 is being ported to OpenGL for the game Kingdom Come: Deliverance.

Feature-set-wise, OpenGL 4.4 is very fast and comparable to DX 11.2, but it's not really used or catered for much, let alone optimised.

A simple example is the idTech 5 engine, which uses OpenGL 3.2. Every time a game built on it is released, AMD is in dire straits due to their drivers compared to NVIDIA.

NVIDIA also does very well in OpenGL in OS X, usually better than AMD.

My old 3.33GHz 6-core 2010 Mac Pro with a GTX 660, versus the 3.5GHz 6-core with D700:






What makes that so interesting for the m295x in Cinebench is that the GTX 780 Ti was paired with a 4770K clocked at 4.5GHz, compared to the iMac's 4.0GHz.

If Cinebench scales linearly with CPU speed, the 780 Ti should be more than 7fps faster.

It's very curious, but as I stated before, we need a lot more information and some real-world tests to see how it compares.

The 4790K turbos to 4.4 GHz, and it's not always perfectly linear, but often it seems pretty close.
 
The 4790K turbos to 4.4 GHz, and it's not always perfectly linear, but often it seems pretty close.

Ah, good to know, thank you. I take it it'll only turbo a single core? I suspect the iMac's thermals might not be happy with all 4 cores at 4.4GHz and the GPU being used.
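
For a rough sense of scale, here is the back-of-the-envelope version of that clock comparison (assuming, purely for illustration, that the Cinebench OpenGL score scaled perfectly linearly with CPU clock, which it only approximates):

```c
#include <stdio.h>

/* Illustrative only: compares the expected linear-scaling gain of a
 * 4.5 GHz 4770K over the iMac's 4790K at its base vs. turbo clocks. */
int main(void) {
    double pc_clock   = 4.5; /* 4770K in the 780 Ti test, GHz */
    double imac_base  = 4.0; /* 4790K base clock, GHz */
    double imac_turbo = 4.4; /* 4790K single-core turbo, GHz */

    printf("Expected gain vs. 4.0 GHz base:  %.1f%%\n", (pc_clock / imac_base - 1.0) * 100.0);
    printf("Expected gain vs. 4.4 GHz turbo: %.1f%%\n", (pc_clock / imac_turbo - 1.0) * 100.0);
    /* ~12.5% against the base clock, but only ~2.3% against the turbo clock,
     * which is why the expected gap largely evaporates. */
    return 0;
}
```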
 
A comparison of the 4770K vs the 4790K shows they are very close in performance:
http://cpuboss.com/cpus/Intel-Core-i7-4770K-vs-Intel-4790K

This quote is probably the best indicator of where the Devil's Canyon CPU wins most:
"When we set the Core i7-4790K to the same 3.5GHz base/3.9GHz Turbo clock speeds as the Intel Core i7-4770K, it ran a full 15 degrees cooler—50 degrees Celsius, compared with 65 degrees Celsius for the Intel Core i7-4770K."

As power efficiency increases and temps drop, the newer CPU delivers additional performance thanks to less thermal creep and less slowdown from rising temps. This makes the 4790K a better choice for an iMac with restricted airflow, compared to a bigger box.

Conversely, overclocking the 4770K in a bigger box will work just as efficiently as the 4790K, provided the cooling solution can bring the temps down to the same levels of creep and slowdown.
 
If it were negligible, please explain the Tom's Hardware OpenGL results.
We all know how powerful the GTX 780 Ti is, yet in OpenGL for Maya 3D and Lightwave it's horribly slow, along with the Titan.

It all just reaffirms that just because a GPU is good at gaming does not mean it's good at everything else.

I gave you an explanation for this below. The OpenGL code those apps are using is awfully outdated and does not work well with modern cards. Proof: AutoCAD has been rewritten with modern rendering code using DirectX, and the 'gaming' cards perform better with it than 'professional' ones (http://www.tomshardware.com/reviews/best-workstation-graphics-card,3493-5.html). The picture is radically different when you look at software that still uses legacy OpenGL renderers.

Now, that difference has nothing to do with OpenGL or DirectX per se. AutoCAD boosted performance on gaming cards not just by using DirectX, but by using DirectX rendering in a way that makes sense for modern GPUs. CAD software has historically been written using OpenGL code that — as mentioned before — has been obsolete for a decade. In the best case, they would be using vertex arrays (although immediate-mode functions are more likely). To utilise a modern card, you need to feed it data via GPU-mapped memory buffers and employ specific rendering techniques. Most CAD programs are outdated in that regard. Why are 'professional' GPUs so good at CAD, though? Because they have drivers that can optimise that code. Not to mention that the 'normal' drivers often also artificially slow down many CAD-related algorithms.
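
To make the 'obsolete code' point concrete, here is a minimal sketch of the two styles (a hypothetical illustration: it assumes an existing legacy/compatibility OpenGL context, and the triangle data and function names are made up):

```c
#include <OpenGL/gl.h>  /* OS X framework header; elsewhere use <GL/gl.h> plus a loader like GLEW */

static const float tri[9] = { 0.f, 1.f, 0.f,  -1.f, -1.f, 0.f,  1.f, -1.f, 0.f };

/* Legacy style still common in CAD codebases: every vertex is handed to the
 * driver through a separate function call, every single frame. */
void draw_immediate(void) {
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < 9; i += 3)
        glVertex3f(tri[i], tri[i + 1], tri[i + 2]);
    glEnd();
}

/* Modern style: upload the geometry once into a GPU-side buffer object,
 * then draw it with a single call per frame. */
GLuint upload_once(void) {
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(tri), tri, GL_STATIC_DRAW);
    return vbo;
}

void draw_buffered(GLuint vbo) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const void *)0); /* reads from the bound buffer */
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```

The first style is exactly the per-vertex chatter that 'professional' drivers are tuned to rescue; the second keeps the data on the GPU, which is what modern cards are built for.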

So far, according to you, if a GPU is godly at gaming, it's the best fit,
yet if it does poorly in professional apps, it's the app's fault, or its drivers.

Exactly. If a GPU can fluently draw an insanely complicated gaming scene consisting of millions of triangles and hundreds of megabytes of textures, using shaders that surpass the complexity of many Photoshop filters, then it should be able to manage a CAD workflow.

Of course, there are professional workflows (scientific computation) which benefit most from one specific capability. For instance, the dedicated GPUs of modern MBPs have a lot of memory bandwidth — so they will be better suited for tasks that copy data more than they process it (such as editing large video or applying filters to large images). The integrated Intel GPU, on the other hand, has more computational ability but poor memory bandwidth — that is why the iGPU can best the Nvidia one on tasks which involve complicated computations (many complex OpenCL algorithms).
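
A hypothetical pair of OpenCL C kernel sketches to illustrate that split (the kernel names and the iteration count are made up; which GPU wins each one depends on its bandwidth-to-ALU ratio):

```c
/* Bandwidth-bound sketch: one multiply per element read and written, so the
 * runtime is dominated by how fast memory can be streamed. */
__kernel void scale_copy(__global const float *src, __global float *dst, float k) {
    size_t i = get_global_id(0);
    dst[i] = k * src[i];
}

/* Compute-bound sketch: hundreds of dependent multiply-adds per element, so
 * the runtime is dominated by ALU throughput rather than memory bandwidth. */
__kernel void heavy_math(__global const float *src, __global float *dst) {
    size_t i = get_global_id(0);
    float x = src[i];
    for (int n = 0; n < 512; ++n)      /* arbitrary iteration count */
        x = mad(x, 1.0001f, 0.0001f);  /* multiply-add, stays on the ALUs */
    dst[i] = x;
}
```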

Instead of simply acknowledging that to find the best GPU for the job, one needs to have it tested for one's specific needs.

That goes without saying, of course. But you don't need a super expensive, rip-off professional GPU to do CAD, as the AutoCAD example impressively shows.
 
Benchmarks are one thing, but will the m295x give the same performance as the 6,1 after being pushed really hard in a game/app for a long time?

I have my doubts: I suspect the 2012 chassis and cooling system it's based on will start to drop off fairly quickly with real-life use, whereas the 6,1 simply doesn't know the meaning of the word throttling.
 
Exactly. If a GPU can fluently draw an insanely complicated gaming scene consisting of millions of triangles and hundreds of megabytes of textures, using shaders that surpass the complexity of many Photoshop filters, then it should be able to manage a CAD workflow.


That goes without saying, of course. But you don't need a super expensive, rip-off professional GPU to do CAD, as the AutoCAD example impressively shows.

I'm sorry, but you're wrong. The GTX 780 Ti is a very good gaming card, but in FCPX it's barely faster than the iMac's m295x.

FCPX uses OpenCL, and the GTX 780 Ti gets a score of 1738 in LuxMark, which translates into FCPX performance, while the m295X gets 1656.
http://www.hardwareluxx.com/index.p...test-nvidia-geforce-gtx-780-ti-.html?start=11
[Image: luxmark.jpg]



http://barefeats.com/imac5k.html
[Image: SIdXWaW.png]


So according to your end-all, be-all DirectX gaming tests, the GTX 780 Ti would be leagues ahead of the m295X for video editing, or any OpenCL-based workflow such as Mari from The Foundry, as well.

You cannot simply rely on gaming as the most important test. It says very little about how the card will perform for a given workflow.
The simple fact that the real workflow tests from Maya and Lightwave have shown is that gaming tests mean very little.

This is summarised very well in the tests I linked you previously, which you even quoted back to me.

Even more so given that no gaming test whatsoever means anything for OS X-based applications, since none of them use DirectX.

Nowhere have I ever once stated that you need a "super expensive rip-off professional GPU".

Although since you only care about the best gaming card, that's somewhat what a person will get if they don't look at real workflow tests, which I have been stating from the start are what matters, especially since one card does not fit all.

The way I see it at the moment, in regards to the new 5K iMac, is that a person who pays $2750 gets a 5K IGZO colour-accurate display with a computer and an m295x that can match a high-end gaming card in video editing and work in Mari, even if it's not very good at gaming.

Even so, I look forward to in-depth reviews testing workflows for specific applications in OS X, similar to what AnandTech did by getting an actual professional to test the Mac Pro in professional applications.
 
I'm sorry, but you're wrong. The GTX 780 Ti is a very good gaming card, but in FCPX it's barely faster than the iMac's m295x.

I find it a bit difficult to discuss things with you when you just cherry-pick parts of my post and ignore the rest. Please, if you already go to the trouble of quoting my posts, comment on the whole post and not on some out-of-context sentences.

First, I explicitly mentioned that particular workflows can be one-sided and thus favour one or another GPU architecture.
Second, the 780 Ti performs barely better than the m295x in FCPX a) because FCPX does not utilise the GPU to its full potential (which I have also mentioned before) and maybe also partly b) because Nvidia Fermi is not that good at heavy computation workflows — b) is also clearly seen in LuxMark. I can assure you that one can easily write an OpenCL program in which the 780 Ti will wipe the floor with the m295x.

So according to your end-all, be-all DirectX gaming tests, the GTX 780 Ti would be leagues ahead of the m295X for video editing, or any OpenCL-based workflow such as Mari from The Foundry, as well.

No, I was talking about CAD workflows only (again, see my post). Scientific computation — or any other nontrivial usage of the GPU as a parallel processing computer — is more difficult to assess, because GPUs with different architectures have different performance characteristics in different areas (which I have also explicitly said before).

Not to mention that my initial point still holds: there is a BIG difference between 'performance' (what can the GPU do and how good is it at it?) and 'performance' (how will the GPU perform in application X?). You seem very set on lumping these things together.

----------

P.S.

The simple fact that the real workflow tests from Maya and Lightwave have shown is that gaming tests mean very little.

Yes, because — and I must have repeated it at least 3 times by now — Maya is incompetent at using the GPU! There is nothing wrong with the GPU; what is wrong is Maya! If I write you a program that will artificially throttle itself on Intel CPUs and thrive on AMD CPUs, will you criticise Intel CPUs or will you criticise me for writing crap software? :rolleyes:
 
I find it a bit difficult to discuss things with you when you just cherry-pick parts of my post and ignore the rest. Please, if you already go to the trouble of quoting my posts, comment on the whole post and not on some out-of-context sentences.

You yourself only quote certain parts; I still responded to your entire post as a whole.
First, I explicitly mentioned that particular workflows can be one-sided and thus favour one or another GPU architecture.
Second, the 780 Ti performs barely better than the m295x in FCPX a) because FCPX does not utilise the GPU to its full potential (which I have also mentioned before) and maybe also partly b) because Nvidia Fermi is not that good at heavy computation workflows — b) is also clearly seen in LuxMark. I can assure you that one can easily write an OpenCL program in which the 780 Ti will wipe the floor with the m295x.

You're confusing Kepler, which the GTX 6xx and 7xx series are, with Fermi, the GTX 4xx and 5xx.
Can you please show me an OpenCL application where the 780 Ti wipes the floor with a similarly performing OpenCL AMD card?
It would be very interesting to see.

No, I was talking about CAD workflows only (again, see my post). Scientific computation — or any other nontrivial usage of the GPU as a parallel processing computer — is more difficult to assess, because GPUs with different architectures have different performance characteristics in different areas (which I have also explicitly said before).

Yes, I saw your post, and as I pointed out prior to that, the same 780 Ti, and even the Titan, were horrible at Maya and Lightwave. Whether or not the software is incompetently made means little, as it just means those GPUs are not the right ones for the job. I never stated that something is wrong with them.

I completely understand the architectural, and even software, differences. It's why I am adamant that a card performing exceptionally well in a gaming test does not mean that same card is suitable for a different workflow.

Not to mention that my initial point still holds: there is a BIG difference between 'performance' (what can the GPU do and how good is it at it?) and 'performance' (how will the GPU perform in application X?). You seem very set on lumping these things together.

I lump them together because they're one and the same.
'performance' (what can the GPU do and how good is it at it?)

What a GPU can do is relative to what you do with it, which is:
'performance' (how will the GPU perform in application X?)

If a GPU is very good at Mari, which is an industry-leading 3D painting application, it shows that that GPU is good at 3D painting.

If a GPU is particularly good at DirectX gaming, that has no bearing on whether it'll be good at using Mari. The same applies in reverse as well.

It's why, since the beginning, I've been adamant that testing should focus on that method, rather than running DirectX gaming benchmarks and simply concluding from them that GPU X is better than GPU Y, as that's not always the case and varies from application to application.

You've already stated the same:
GPUs with different architectures have different performance characteristics in different areas

Why is it so hard for you to accept that testing for specific use cases is better overall than simply relying on gaming? Not just gaming, but gaming within Windows using DirectX, which has no bearing on OS X applications since DirectX does not run on that operating system.

The iMac, which this entire thread is about, is a prime example of that. It's very good at FCPX, Compressor, Motion, Mari, and the entire Adobe suite, which uses OpenCL in OS X.

I do know that the m295x doesn't hold a candle to a card like the 780 Ti in gaming, but just because the GTX is significantly better at gaming does not mean it's significantly better in all applications or uses.
 
Why is it so hard for you to accept that testing for specific use cases is better overall than simply relying on gaming? Not just gaming, but gaming within Windows using DirectX, which has no bearing on OS X applications since DirectX does not run on that operating system.

It is not at all hard for me — after all, I have made exactly the same point ;)

At any rate, it seems that we have a classic misunderstanding here. I thought we were discussing the performance of the GPU, while you seem only interested in its viability in certain applications (which, again, is perfectly fine). In that context, I don't have anything to contribute to this thread. I still believe that these are very different things, though. Your perspective is clearly the more practical one — it's about choosing a tool for a particular job. Mine is a more academic one — it's about discussing the architecture, efficiency and performance for a range of algorithms under different load factors, not really about the idiosyncratic behaviour of certain software suites (like Maya) which has little to do with the hardware itself.
 
It is not at all hard for me — after all, I have made exactly the same point ;)

At any rate, it seems that we have a classic misunderstanding here. I thought we were discussing the performance of the GPU, while you seem only interested in its viability in certain applications (which, again, is perfectly fine). In that context, I don't have anything to contribute to this thread. I still believe that these are very different things, though. Your perspective is clearly the more practical one — it's about choosing a tool for a particular job. Mine is a more academic one — it's about discussing the architecture, efficiency and performance for a range of algorithms under different load factors, not really about the idiosyncratic behaviour of certain software suites (like Maya) which has little to do with the hardware itself.

Ah it seems we do have that. Excuse me.

In regards to the architecture, the m295x uses AMD's new Tonga, while the m290x is still the older Tahiti.

With that in mind, the m295x is not far off, performance-wise, from the Mac Pro's Tahiti-based D700. It does make me wonder how much better a Tonga-based D700 would have been.

In either case we'll need to wait for thorough reviews to appear and put the systems through their paces. Hopefully we both get our answers.
 