The D500 isn't the Pro chip. It's the XT2, like the D700.
Oh ok, my bad then. Thanks for the correction!
Ok good to know... A seriously handicapped version of the XT2.
The XT2 allowed Apple to go lower on voltage and TDP than the Pro would, and it simplified the production process a bit.
Makes sense... What's your take on the D500? It doesn't seem to offer any added performance over the D300 in most of the benchmarks that have been run. Why would Apple even bother offering both the D500 and D300 given their near-identical performance?
Better performance with 4K video in FCP, perhaps? Anand said that VRAM usage in 4K editing easily exceeded 3GB. I don't know precisely which OS X apps take advantage of double precision, but if any do, that could also be a good reason.
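If anyone wants to check what these cards actually expose to applications, here's a minimal sketch (using the third-party pyopencl package; the script is my own, not anything Apple ships) that lists each OpenCL GPU's VRAM and whether it advertises the fp64 double-precision extension:

```python
# Minimal sketch: list OpenCL GPUs, their VRAM, and whether they expose fp64.
# Requires the third-party "pyopencl" package (pip install pyopencl).
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        if not (device.type & cl.device_type.GPU):
            continue
        vram_gb = device.global_mem_size / (1024 ** 3)
        has_fp64 = "cl_khr_fp64" in device.extensions  # double-precision extension
        print(f"{device.name}: {vram_gb:.1f} GB VRAM, fp64={'yes' if has_fp64 else 'no'}")
```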
Glad to help, guys; the more we learn about these cards, the easier it will be to decide which one to purchase.
For those asking if CrossFire was enabled: it was enabled, but it wasn't forced on in the driver. There is an option to simply "enable CrossFire", which I had ticked, and then there is an option to "enable CrossFire for programs without an application profile in the driver", which I didn't have ticked. I'm assuming that's to allow per-application use of CrossFire.
Anyway, I don't think many 3D creation/CAD apps even utilize CrossFire, but if you guys really want to see, I can re-run the benchmark with it forced on to check for any difference.
I'm a freelance 3D artist, and I bought this machine as my work computer since it seemed like a good value for the 6-core, which I use for rendering, and the dual FirePros. So far the polygon and viewport performance I'm getting on high-detail geometry is a tad disappointing compared to my previous GTX 760 PC; on average it's exactly the same, if not a few fps slower (again, no CrossFire). CrossFire seems to make no difference for 3ds Max or Maya.
Overall, though, I'm happy with the machine, and its 6-core is awesomely fast. It's just a little pricey considering we now know they are ~W5000s; let's hope the drivers are just poor and it's not getting the performance it should.
I ordered the 6-core with the D700s and use 3ds Max too. I also have a gaming system with an EVGA GTX 760 in it, so I'll be able to compare on that front. I would make sure that you have OpenGL selected in the 3ds Max viewport configuration; you should see a noticeable improvement with hardware lighting and viewport speed on heavy scenes. Well, I hope so anyway, haha.
Just tried OpenGL in Max 2014; it's really, really slow.
OpenGL in Max has always been kind of odd. Side note: I DID notice at least a 15 fps improvement in Maya over the GTX 760.
Yeah, definitely a huge increase in Maya; I'm happy with that. It makes sense, since Maya is fully OpenGL while 3ds Max is better with DirectX, so considering it's a FirePro card, it's pretty much to be expected.
I wonder if the slower performance of the nMP FirePros compared to the standard AMD FirePros is because of the lower clock speed on the nMP parts?
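One rough way to sanity-check the clock-speed theory: GCN does roughly 2 FP32 FLOPs per shader per clock, so you can back an implied clock out of the advertised TFLOPS. The shader counts and TFLOPS below are the commonly quoted figures (Apple's marketing numbers for the D500/D700, the retail spec for the W9000), so treat this as a ballpark sketch, not a measurement:

```python
# Back-of-the-envelope: implied core clock from advertised peak FP32 throughput.
# Peak FP32 FLOPS on GCN ~= 2 FLOPs/shader/clock * shaders * clock.
# Shader counts and TFLOPS are the commonly quoted figures; treat as approximate.
cards = {
    "D500":  (1526, 2.2e12),   # 1526 shaders, ~2.2 TFLOPS per Apple's specs
    "D700":  (2048, 3.5e12),   # 2048 shaders, ~3.5 TFLOPS per Apple's specs
    "W9000": (2048, 4.0e12),   # retail Tahiti XT FirePro, ~4 TFLOPS
}

for name, (shaders, flops) in cards.items():
    clock_mhz = flops / (2 * shaders) / 1e6
    print(f"{name}: implied clock ~{clock_mhz:.0f} MHz")
```

Run that and the nMP parts come out a few hundred MHz below the retail card's ~975 MHz, which would line up with the lower-clock explanation.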
Hmm, if I'm not mistaken, a few years ago Autodesk unified all their 3D programs (Inventor, Revit, Maya) onto a single 3D engine, and instead of using OpenGL it uses DirectX. So, theoretically, if you use Autodesk 3D software you won't see much difference (and would probably see a performance decrease) with a workstation GPU.
Oh, possibly; I'm not sure, actually. I do know that in the 3ds Max settings it's still showing DirectX 11 with the custom Nitrous viewport, and I'm getting gains with the FirePro in Maya vs. an NVIDIA gaming card.
Moving to DirectX doesn't make much sense at all for multiplatform applications. The only reason to pick DirectX is having zero plans to move the app to another platform.
http://usa.autodesk.com/products/mac-compatible-products
[ The right column is the Windows products, which only run in a VM / Boot Camp. ]
Revit is Windows-only, but AutoCAD and Maya are multiplatform. Revit, 3ds Max, and Inventor all being a veneer over DirectX would make a lot more sense.
Perhaps they are using entirely different frameworks for different platforms, but that is a lot of work (although some Cocoa / Win32 differences drive that anyway), and it really isn't going to help with moving to the other OpenGL platforms like iOS / Android.
Hmm, wouldn't enabling ECC make the results slower? Also, isn't a workstation GPU similar to a gaming GPU, with the biggest difference being the driver?

Coupled with the nuked ECC, these results do smell a lot like mainstream-configured GPUs being dressed up with FirePro drivers, but with the "FirePro" differentiators turned off (hence Apple and AMD slapping a large mark-up on them). The GPU in a W5000 isn't that much better than the iMac's max BTO option.
Odd that they would be using a die with 25% of the cores switched off. That's a lot of stuff they had to fab that isn't doing anything. Even odder that it's 25% plus 10 cores off (1526 instead of 1536). Every 4 dies on that wafer carry a whole die's worth of "dead weight".
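A quick back-of-the-envelope check on that, assuming the full Tahiti die has 2048 stream processors (which is what makes 1536 exactly 75%):

```python
# Rough arithmetic on how much silicon is disabled in a 1526-shader part,
# assuming the full die has 2048 stream processors.
full_die = 2048
enabled = 1526

disabled = full_die - enabled   # shaders sitting dark on each die
quarter = full_die // 4         # what an even 25% cut would be

print(f"Disabled per die: {disabled} ({disabled / full_die:.1%})")
print(f"That's 25% ({quarter}) plus {disabled - quarter} extra shaders off")
print(f"Every 4 dies: {4 * disabled} disabled shaders ~= {4 * disabled / full_die:.2f} whole dies of dead weight")
```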
At the mark-up AMD is perhaps charging, it probably isn't hurting them all that much. It may also get Apple better pricing on the full XT2 models they're getting, if everything counts toward the same volume discount.
Ok, I've just rechecked and they unified their material library (link), which means Revit, Inventor, and 3ds Max (not sure about Maya) must share the same 3D engine, because if they don't, I'm not sure how Autodesk manages interoperability between those programs given how different they are in application. AutoCAD is primarily 2D-based software (I know it can do 3D, but not like Revit), so porting it to the Mac is possible, but even then AutoCAD for Mac is missing some features which Autodesk themselves admit might never be ported because the complexity is off the charts.
I tried searching for the project Autodesk had for unifying their 3D engine but couldn't find it. What I did find is that they are using Direct3D, which is Windows-only; basically it lets the user run a gaming GPU without having to use a workstation GPU.
Check out these results and notice the difference between workstation and gaming GPUs when it comes to DirectX/Direct3D versus OpenGL (note that both are Autodesk programs):
Inventor DirectX/Direct3D test
Maya OpenGL test
As for CUDA and OpenCL: considering that Apple is pushing for broader OpenCL adoption in the market (and probably wants it to be the dominant standard, since it's open), sticking with AMD cards is the better choice when it comes to OpenCL.
Too bad for Apple, though, that the GPGPU market is mostly CUDA.
You are looking at "flipped off" from the wrong angle.
For a long time silicon vendors have been faced with the problem of what to do with chips that "mostly work" or "work at less than the expected clock".
If you make a 12-core chip, but a particular one only has 9 working cores, do you throw it away, or do you disable the 9th core and sell it as an 8-core? "Sell it as an 8-core" is the answer. (No different if a GPU should have 1024 cores but only 803 work: sell it as a 768-core part.)
The same thing happens with cache. If you try to make a chip with 30 MiB of cache, but only 20 MiB is good, do you throw it away or sell it as a 20 MiB SKU? "Sell it as a 20 MiB" is the answer.
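As a toy illustration of that binning logic (the SKU names and core counts here are made up, not any vendor's actual tiers), each die gets sold as the largest SKU its working-core count can still satisfy:

```python
# Toy model of binning: sell each die as the largest SKU it can still satisfy,
# instead of discarding dies with a few defective cores. SKU tiers are made up.
SKU_TIERS = [(12, "12-core"), (10, "10-core"), (8, "8-core"), (6, "6-core")]

def bin_die(working_cores: int) -> str:
    """Return the SKU a die with this many working cores gets sold as."""
    for min_cores, sku in SKU_TIERS:
        if working_cores >= min_cores:
            return sku          # extra working cores beyond the SKU get fused off
    return "scrap"              # too few working cores to sell at all

# Example: a 12-core design where some dies come back with defects.
for cores in (12, 11, 9, 7, 5):
    print(f"{cores} working cores -> {bin_die(cores)}")
```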
Apple is probably getting a great price on GPUs that ATI would otherwise have thrown in the trash for being too slow or having too few working cores.
That's how it usually works, AFAIK. The next thing is that the XT2 has been replaced by the XTL in all currently offered cards. AMD got rid of GPUs that are no longer on the market, and Apple got a decent discount. A great deal for both.