You're probably on to something there. The D500 and D700 specs are close to the Sky 500 & 700. The Sky 900, on the other hand, is a dual-GPU card, so the D700 isn't really comparable to it.

http://www.anandtech.com/show/6867/amd-announces-radeon-sky-family-of-servercloud-video-cards

Ha, I just want to point out the very similar naming scheme, and that the Radeon Sky was announced in March 2013. AMD seems to like this tune this year. :D

In fact, the Radeon Sky series and the FirePro S series are also Tahiti variants.
 
2. The FirePro D series is just a renamed and further customised edition of Tahiti.

I think the probability of Situation 2 is high. According to post #32, the D700 already had benchmark data in June 2013, and the performance of the D series trails the W series across the board; it is cut down too much.

This, IMO. Especially since the D500 has 1526 stream processors: there's no R9 series card with that processor count, but in the 7 series there's Tahiti LE (aka 7870 XT) with such specs. Add to this the fact that Tahiti LE based cards in 10.9 are recognized by GLView and Luxmark as the D500, and the 7970 (Tahiti XT) as the D700.
 
There will be no CrossFire on the Mac Pro under OS X.

However, if Apple decided to go that route, they already have the underpinnings to add it without CrossFire: an app can manually implement CrossFire-style multi-GPU rendering right now if it chooses to. No need for Mantle.

In fact, under MS Windows, CrossFire runs in two modes: one is enabled per application, for each program that supports CF, and the other is a global mode that lets every program (and the system GUI) be rendered by multiple GPUs.

So why doesn't OS X support CrossFire? Is there any information on this point?
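
For what it's worth, the "manually implement it" route mentioned above is already possible on OS X via OpenCL: an application can enumerate both GPUs and split its work between them itself, with no driver-level CrossFire involved. A minimal sketch of the idea (device discovery only; the commented-out kernel launches stand in for real work, and error handling is omitted):

```c
/* Minimal sketch: drive both Mac Pro GPUs from one app via OpenCL,
 * splitting the work in half; no CrossFire involved. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void) {
    cl_platform_id plat;
    cl_device_id gpus[2];
    cl_uint n = 0;

    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 2, gpus, &n);
    if (n < 2) { printf("need two GPUs, found %u\n", n); return 1; }

    /* one shared context, one command queue per GPU */
    cl_context ctx = clCreateContext(NULL, n, gpus, NULL, NULL, NULL);
    cl_command_queue q0 = clCreateCommandQueue(ctx, gpus[0], 0, NULL);
    cl_command_queue q1 = clCreateCommandQueue(ctx, gpus[1], 0, NULL);

    /* the app decides the split itself: first half of the workload on
     * GPU 0, second half on GPU 1, then wait for both to finish, e.g.
     * clEnqueueNDRangeKernel(q0, kernel, 1, &offset_lo, &half, ...);
     * clEnqueueNDRangeKernel(q1, kernel, 1, &offset_hi, &half, ...); */
    clFinish(q0);
    clFinish(q1);

    clReleaseCommandQueue(q0);
    clReleaseCommandQueue(q1);
    clReleaseContext(ctx);
    return 0;
}
```

The catch is exactly the per-application CrossFire mode described above: every app has to do this split itself. There is no global mode on OS X where the driver transparently spreads all rendering across both GPUs.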
 
There will be no CrossFire on the Mac Pro under OS X.

However, if Apple decided to go that route, they already have the underpinnings to add it without CrossFire: an app can manually implement CrossFire-style multi-GPU rendering right now if it chooses to. No need for Mantle.

What the crap?? Aren't Mantle and CrossFire two totally separate technologies? Mantle is just a lower-level API for certain new ATI GPUs. What does that have to do with CrossFire??

edit: and as a side note: Will OpenGL even benefit from Mantle? I thought Mantle was just for gaming.
 
What the crap?? Aren't Mantle and CrossFire two totally separate technologies? Mantle is just a lower-level API for certain new ATI GPUs. What does that have to do with CrossFire??

Mantle is a lower-level API with less abstraction (than OpenGL or Direct3D). They are different, but they may intersect. If the CrossFire pair presents as a single GPU at the Mantle level, then the two will intersect. At the very least, Mantle would have to be "aware" of what to do in CrossFire mode.

I'm not sure how much the low-level Xbox API that Mantle appears to be derived from tries to put a thin layer on top of heterogeneous computation (getting jobs allocated to GPGPU cores, if that makes sense). All the next-gen consoles are heterogeneous-oriented APUs.

I'd be surprised if Mantle makes it to OS X. It appears to usurp API territory that Apple has staked out as solely theirs to control.


edit: and as a side note: Will OpenGL even benefit from Mantle? I thought Mantle was just for gaming.

OpenGL is effectively a competing API to Mantle. OpenGL isn't going to particularly benefit at all, beyond general sharing of knowledge about what doesn't work (bugs, glitches, etc.). Like CrossFire, though, they will intersect where OpenGL and Mantle both feed data/commands into the GPU pipelines.
 
I thought this was pretty useful info for all the people asking how good the nMP will be at playing games.

Workstation GPUs and gaming GPUs have very different rendering behaviors, which is the result of the drivers they are using.

A "good" workstation GPU is generally an "avg" gaming card. The W9000 is a pretty decent gaming card just because it is a very powerful card; it's able to compensate for the inefficiency of how it renders your game.

A gaming card redraws the entire screen dozens of times a second. The screen is just one big dump of pixels: something happens in the corner of the screen, the whole screen refreshes. This is great for games, which are chaotic and generally have things happening all over the screen, often at once. It is not an efficient way to deal with heavy tasks, though; one small thing happens and the whole screen redraws/recalculates.

A workstation card works in the opposite way. It breaks the screen into many small layers/pieces and only redraws the parts that are changing. This is great for a heavy dataset: say I have four 3D viewports and I rotate the model in one of them; only that one viewport redraws, so all of the horsepower in my workstation card is focused on updating just a small percentage of pixels. A game, by contrast, results in hundreds or thousands of layers with things changing all over the place, which is a lot for the workstation card to manage.
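
To make the contrast concrete, here is a toy sketch of the two strategies just described (hypothetical structure and function names, purely illustrative of the redraw logic, not of any real driver):

```c
/* Toy illustration: full-frame redraw vs. dirty-viewport redraw.
 * draw_region() stands in for whatever actually rasterizes pixels. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_VIEWPORTS 4

typedef struct {
    int x, y, w, h;
    bool dirty;   /* set when the model shown in this viewport changes */
} Viewport;

static void draw_region(int x, int y, int w, int h) {
    printf("rasterizing %dx%d region at (%d,%d)\n", w, h, x, y);
}

/* "Gaming" strategy: anything changed? redraw the entire frame. */
static void redraw_game_style(int screen_w, int screen_h) {
    draw_region(0, 0, screen_w, screen_h);
}

/* "Workstation" strategy: redraw only the viewports that changed. */
static void redraw_workstation_style(Viewport vp[], int count) {
    for (int i = 0; i < count; i++) {
        if (vp[i].dirty) {
            draw_region(vp[i].x, vp[i].y, vp[i].w, vp[i].h);
            vp[i].dirty = false;
        }
    }
}

int main(void) {
    Viewport vp[NUM_VIEWPORTS] = {
        {   0,   0, 960, 540, false },
        { 960,   0, 960, 540, false },
        {   0, 540, 960, 540, true  },   /* user rotated the model here */
        { 960, 540, 960, 540, false },
    };
    redraw_game_style(1920, 1080);                /* whole 1920x1080 dump */
    redraw_workstation_style(vp, NUM_VIEWPORTS);  /* just one 960x540 pane */
    return 0;
}
```

A real driver's dirty-region tracking is far finer-grained than this, but the effect is the same: four viewports, one change, one redraw.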

This is why gaming cards with their gaming drivers are good for games, and meh for CAD/3D etc., and why workstation cards have more VRAM and are not as good for games.
 
This is why gaming cards with their gaming drivers are good for games, and meh for CAD/3D etc., and why workstation cards have more VRAM and are not as good for games.

The 7970 performs very well against the W9000 at rendering tasks.

As MacVidCards points out, it's mostly just drivers that make the difference, and in Mac OS, they use the same drivers.

Even in Windows, two 7970's, when used together properly, will perform far better than the W9000 at less than 1/5 the price. Three 7970's (~$930) is still 1/3 the price.

I can't seem to find any benchmarks pitting multiple 7970's against the FirePro cards, but I'd imagine it's something AMD doesn't want people to notice.
 
The 7970 performs very well against the W9000 at rendering tasks.

As MacVidCards points out, it's mostly just drivers that make the difference, and in Mac OS, they use the same drivers.

Even in Windows, two 7970's, when used together properly, will perform far better than the W9000 at less than 1/5 the price. Three 7970's (~$930) is still 1/3 the price.

I can't seem to find any benchmarks pitting multiple 7970's against the FirePro cards, but I'd imagine it's something AMD doesn't want people to notice.

Well, I think you'd better compare SPECviewperf scores, not game performance. It is different from the past: there is now a great gap between workstation graphics cards and desktop graphics cards, especially in professional applications like CAD/CAM. If you just compare game performance, there is no need to pay the high price.
This is your mistake: the difference is not only the drivers, but something deeper.
 
Well, I think you'd better compare SPECviewperf scores, not game performance. It is different from the past: there is now a great gap between workstation graphics cards and desktop graphics cards, especially in professional applications like CAD/CAM. If you just compare game performance, there is no need to pay the high price.
This is your mistake: the difference is not only the drivers, but something deeper.

You only read the title of the link I posted. The benchmarks on that page were rendering benchmarks.

Here are more featuring the 7970 (no comparison to the W9000 there, though)... as you can see, it does quite well.
 
You only read the title of the link I posted. The benchmarks on that page were rendering benchmarks.

Here are more featuring the 7970 (no comparison to the W9000 there, though)... as you can see, it does quite well.

Well, let me show you the W8000's SPECviewperf 11 scores.

Please check this picture.
http://www.chiphell.com/data/attachment/forum/201301/13/19291955r2ej2j9o2vju0v.jpg
from this post: http://www.chiphell.com/thread-656966-1-1.html

And from your link, I picked the data from
http://www.tomshardware.com/reviews/geforce-gtx-titan-opencl-cuda-workstation,3474-6.html

To compare them, I list the data below (all settings 1920×1080, no AA):

Viewset                                       W8000   7970GE   W8000:7970GE
CATIA (catia-03)                              22.63    12.34   1.83x
EnSight (ensight-04)                          51.20    67.27   0.76x
Lightwave (light01)                           71.36    38.84   1.84x
Maya (maya-03)                                75.96    18.14   4.19x
Pro/ENGINEER (proe-05)                         6.34     3.90   1.63x
SolidWorks (sw-03)                            64.46    35.96   1.79x
Siemens Teamcenter Vis. Mockup (tvcis-02)     23.22     8.80   2.64x
Siemens NX (snx-01)                           45.13    18.91   2.39x

From the linked article "How Well Do Workstation Graphics Cards Play Games?" we know that the W8000 is weaker than the W9000 (and the W8000 sometimes falls behind even the W7000).

By the way, you can get my score (single W7000, 1920×1080, no AA) from this post: http://www.chiphell.com/thread-876879-3-1.html at #81

In fact, in some viewsets the 7970GE's score falls behind even the W5000.

I think the result is obvious: the 7970GE is in no way a replacement for the FirePro W series.
And rendering benchmarks cannot reflect the performance of professional applications.

Cheers,

Shawn
 
I've been busy all week, and just now trying to catch up on current events.

So, is there any certainty about what these GPU's are equivalent to?

Are they rebranded Sky-Series as someone suggested above? If so, is there any consensus on the desktop equivalent to these cards?
 
Are they rebranded Sky-Series as someone suggested above?

The naming strategy is similar to the Sky series'. The specs, though, don't line up along the numbers at all.

Sky 500 --> Apple D300
Sky 700 --> in the same ballpark as the Apple D500
Sky 900 --> a complete non-match to the Apple D700 (the former is a 2x GPU card; the latter has only one GPU).


In that the Sky series isn't focused on output to external card-edge connectors, I think that is the primary similarity. The virtual hosting / cloud graphics stuff is not necessary for a workstation with a user sitting right in front of it. The D series is a custom card that pumps out video output, but internally to another part of the computer, as opposed to a standard edge connector.

This is all quite similar to when Apple rebranded the PowerPC processors as G3, G4, and G5 while IBM/Motorola had other names for them. There isn't some master plan in the naming, other than to make it Apple-unique.
 
I think the result is obvious: the 7970GE is in no way a replacement for the FirePro W series.
And rendering benchmarks cannot reflect the performance of professional applications.

Cheers,

Shawn

Very interesting. The drivers for workstation cards are clearly better suited towards professional tasks in Windows--about 200% better frame-rates. I stand corrected.

I wonder about the Mac benchmarks, now that we know they're the same drivers.
 
Very interesting. The drivers for workstation cards are clearly better suited towards professional tasks in Windows--about 200% better frame-rates. I stand corrected.

I wonder about the Mac benchmarks, now that we know they're the same drivers.

Hmm, I still want to say that for today's professional cards, the difference is not only in the drivers.

The recognition error occurs because they share the same core, but that does not mean they are the same thing.
For example, as we all know, the W7000 and the 7850 share the same core, but if you scan through the catalogue of 7850 products, you will find that the 7850 has no single-slot edition.

Times have changed, and pro cards too. This is no longer the era when we could tweak a driver to give a gaming card a professional card's performance. That applies not only to AMD, but also to NVIDIA.

If this method (modded drivers) were feasible, the same trick would have been cracked on the Windows platform, but in reality it has not been.
 
Is it fair to say that

"the trashcan Mac Pros are recycling decent Radeon cards, calling them FirePro with previously unseen model names, boosting the price a lot, and putting them in an Apple-proprietary form factor so that upgrades are out of the question"​

Am I reading the information wrong?
 
Is it fair to say that

"the trashcan Mac Pros are recycling decent Radeon cards, calling them FirePro with previously unseen model names, boosting the price a lot, and putting them in an Apple-proprietary form factor so that upgrades are out of the question"​

Am I reading the information wrong?

MacVidCards has another thread talking about this in detail, but my understanding (based on what I've read here and elsewhere) is this:

For the most part, workstation graphics cards in general are just slightly modified retail (Radeon / GeForce) cards with ECC and extremely picky Windows drivers.

So basically you're probably right on the money, but this is the same as in the retail PC world.

The question that MacVidCards answers is "Is there any difference between Radeon and FirePro in OS X" and the answer appears to be, apart from ECC, no. They use the same drivers, so it seems highly unlikely professional apps would behave differently.

Another question is: if you boot into Windows, will they function like PC FirePro workstation cards in apps like Maya and Lightwave? I would think (and I'm sure MacVidCards would agree) that there's no reason to think they wouldn't, though I would guess it would be at a lower clock.

As far as upgradability, it's unclear if it's possible to upgrade these cards yet. My understanding is that they may unscrew with a Phillips screwdriver and may not be soldered in, but everyone agrees that once you get a card out, there will likely not be any options for replacing it, that is, until Apple deigns to release one in a newer model of Mac Pro. IMO people who are banking on the upgradability of the GPU are totally nuts, but the truth is we don't know with 100% certainty yet.
 
VERY disingenuous.
Even more than Nvidia's Quadro cards, FirePro means "regular card marked up by 300-400% for no discernible performance increase".
Maybe from a comparison of the spec sheets, but there are other factors that don't really get mentioned much. First of all, professional-level cards, unlike gaming cards, need to be capable of being pushed to their limit and kept there for a lot longer than a gaming card. If I buy a new Mac Pro, I would expect to be able to process 4K video 24/7 for the full three-year AppleCare period without pause.

While I'm not expecting to do that (I barely do any work with regular HD video), it's the difference between a professional computer and a high-end gaming rig; a gaming rig might reach the same performance levels some of the time, while a professional computer should be expected to do it for at least a full working day, every business day of the week, at a minimum. That's not to say you're not still risking a failure; even professional parts will fail now and then, or suffer a manufacturing defect that isn't immediately apparent, etc., and you should always have backups and such. But if I were to run a gaming rig the way I might run a Mac Pro, I would expect the former to wear out a lot faster.

So yeah, the huge markup may still be a bit on the unreasonable side, but considering we can't just remove and replace cards in the new Mac Pro, the lifetime of those cards under load is more important than ever.


Where are people getting their comparisons for the D300's, though? Is it purely from looking at existing tech specs and comparing? In the video about Mac Pro production it looks like Apple is assembling the cards themselves, so it could be an almost fully custom design. I'm hoping they're erring towards better GPUs and providing less VRAM to keep heat manageable, though. Plus, the VRAM is presumably ECC (otherwise what's the point of using ECC RAM for the system), so memory comparisons aren't going to be the most useful: Apple could easily have changed what they're adding in order to balance their own concept of the D300, D500, etc.
 
a gaming rig might reach the same performance levels some of the time, while a professional computer should be expected to do it for at least a full working day, every business day of the week, at a minimum.

I've read this a few times. What are the differences between gaming cards and professional cards that allow the Pro cards to last under these conditions?
 
I've read this a few times. What are the differences between gaming cards and professional cards that allow the Pro cards to last under these conditions?

Besides drivers, they're usually built from higher-specced parts with ECC high-performance RAM, and they come with professional warranties and support.

It's usually the support and warranty that let them charge so much.
I agree it's silly, but no one is doing anything about it, since professionals need the support and the cards that these companies charge so much for.
 
Besides drivers, they're usually built from higher-specced parts with ECC high-performance RAM, and they come with professional warranties and support.

It's usually the support and warranty that let them charge so much.
I agree it's silly, but no one is doing anything about it, since professionals need the support and the cards that these companies charge so much for.

I get and acknowledge the ECC RAM, but what other parts are higher spec? Many high-quality, low-cost consumer cards have excellent components. Can you give examples (with sources) of the higher-quality components (capacitors, maybe)? Not to nitpick, but I've seen this claim repeated elsewhere with no definitive information.

As far as warranty, this inexpensive 7970 has a lifetime warranty, whereas this FirePro W9000's is only 3 years. As far as support, most GPU makers support their products for many years, and many pro apps list many consumer cards on their approved hardware lists.
 
I don't believe OS X supports CrossFire. Professionals use the cards independently, so I doubt you'd want them for gaming.

Well... that's not for sure. A DreamWorks guy, in a presentation that took place after WWDC, said that the nMP's dual cards can be used combined, something that "till now was available only to games." That's about the phrase he used.
 
Well... that's not for sure. A DreamWorks guy, in a presentation that took place after WWDC, said that the nMP's dual cards can be used combined, something that "till now was available only to games." That's about the phrase he used.

There's no reason to think Apple would make CrossFire drivers, as even multi-GPU professional apps don't require it.
 
I get and acknowledge the ECC RAM, but what other parts are higher spec? Many high-quality, low-cost consumer cards have excellent components. Can you give examples (with sources) of the higher-quality components (capacitors, maybe)? Not to nitpick, but I've seen this claim repeated elsewhere with no definitive information.

As far as warranty, this inexpensive 7970 has a lifetime warranty, whereas this FirePro W9000's is only 3 years. As far as support, most GPU makers support their products for many years, and many pro apps list many consumer cards on their approved hardware lists.

Nah, I get what you're saying. I don't have any proof; it's just what AMD and NVIDIA have been spouting. Like I said, it's all just to mark up the stuff, besides the support.

Also, the USA sure is nice; there's no such thing as a lifetime warranty in the EU, and in my experience XFX has the worst warranty of any AMD card out there. So bad that OcUK (one of the largest e-tailers here) has dropped them as a supplier because of their failure to honour proper RMA procedures, and for sometimes cutting corners with cheap capacitors and such.

With FirePro, though, you're dealing directly with AMD, and because it's a workstation card they will go out of their way to sort you out and get you up and running again. It's part of the massive cost.

Here's a list of the tech/features on the AMD FirePro cards, though:

- Error Correcting Code (ECC) memory: helps ensure the accuracy of your computations by correcting single-bit errors and detecting double-bit errors caused by naturally occurring background radiation (a toy sketch of the idea follows below).

- Optimizations and certifications for many major CAD/CAE and media & entertainment applications.

- GeometryBoost: lets the GPU process geometry data twice per clock cycle, doubling the rate of primitive and vertex processing. Triangle rates are twice those of a GPU without GeometryBoost.

- Framelock/Genlock: facilitates synchronization to external sources (Genlock) or synchronizes 3D rendering across multiple GPUs in different systems (Framelock).

- Partially Resident Textures (PRT): can utilize absolutely enormous texture files, up to 32 terabytes, with minimal performance impact.

- Video Codec Engine (VCE): a multi-stream hardware H.264 HD encoder, for power-efficient and quick video encoding.
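
On the ECC point, the underlying idea is a Hamming-style code: a few extra parity bits are stored per word, and the parity pattern of a corrupted word points at exactly which bit flipped so it can be flipped back. A toy Hamming(7,4) sketch of the principle (illustrative only; AMD's actual ECC works on much wider words, but the math is the same):

```c
/* Toy Hamming(7,4): 4 data bits protected by 3 parity bits.
 * Positions 1..7 (1-based); 1,2,4 are parity, 3,5,6,7 are data. */
#include <stdio.h>

static unsigned encode(unsigned d) {            /* d = 4 data bits */
    unsigned c = 0;
    c |= ((d >> 0) & 1) << 2;                   /* data -> positions 3,5,6,7 */
    c |= ((d >> 1) & 1) << 4;
    c |= ((d >> 2) & 1) << 5;
    c |= ((d >> 3) & 1) << 6;
    unsigned p1 = ((c >> 2) ^ (c >> 4) ^ (c >> 6)) & 1;  /* covers 3,5,7 */
    unsigned p2 = ((c >> 2) ^ (c >> 5) ^ (c >> 6)) & 1;  /* covers 3,6,7 */
    unsigned p4 = ((c >> 4) ^ (c >> 5) ^ (c >> 6)) & 1;  /* covers 5,6,7 */
    return c | (p1 << 0) | (p2 << 1) | (p4 << 3);
}

static unsigned correct(unsigned c) {
    /* recompute parity; the syndrome is the 1-based index of the bad bit */
    unsigned s1 = ((c >> 0) ^ (c >> 2) ^ (c >> 4) ^ (c >> 6)) & 1;
    unsigned s2 = ((c >> 1) ^ (c >> 2) ^ (c >> 5) ^ (c >> 6)) & 1;
    unsigned s4 = ((c >> 3) ^ (c >> 4) ^ (c >> 5) ^ (c >> 6)) & 1;
    unsigned syndrome = s1 | (s2 << 1) | (s4 << 2);
    if (syndrome) c ^= 1u << (syndrome - 1);    /* flip the bad bit back */
    return c;
}

int main(void) {
    unsigned word = encode(0xB);        /* data bits 1011 */
    unsigned hit  = word ^ (1u << 5);   /* a stray bit flip in storage */
    printf("stored %02X, read back %02X, corrected %02X\n",
           word, hit, correct(hit));    /* stored 55, read 75, corrected 55 */
    return 0;
}
```

Scaled up to whole memory words with an extra overall parity bit (SECDED), this is what lets the card correct a single flipped bit and at least detect a double flip, rather than silently computing with bad data.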
 
Some insights here...

http://architosh.com/2013/10/the-mac-pro-so-whats-a-d300-d500-and-d700-anyway-we-have-answers/

D700

So here's the easiest part. The D700 is an exact spec match to the FirePro W9000, AMD's highest-performing workstation-class GPU. Notable specs for both the D700 & W9000 include:

- 2048 stream processors (texture mapping units / unified shaders)
- 384-bit memory bus width
- 264 GB/s memory bandwidth
- 6 GB GDDR5 VRAM
The W9000 is specced at 4 teraflops single precision, while Apple has the D700 at 3.5 teraflops. This could be because of a frequency delta on the core clock: the W9000 is specced at a 975 MHz core clock. Perhaps the D700 runs at 900? The D700 is a Tahiti-based GPU.
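
That clock guess is easy to sanity-check: for these GCN chips, single-precision peak = 2 operations (a fused multiply-add) × stream processors × clock. A back-of-envelope sketch using only the shader counts and teraflop figures quoted in this article (the derived clocks are estimates, not confirmed specs):

```c
/* Back out the core clock implied by the quoted teraflop figures:
 * GCN single-precision peak = 2 ops (FMA) * stream processors * clock */
#include <stdio.h>

int main(void) {
    /* shader counts and TFLOPS as quoted in the article */
    struct { const char *name; double shaders, tflops; } cards[] = {
        { "W9000", 2048, 4.0 },   /* official core clock: 975 MHz */
        { "D700",  2048, 3.5 },
        { "D500",  1525, 2.2 },
        { "D300",  1280, 2.0 },
    };
    for (int i = 0; i < 4; i++) {
        double mhz = cards[i].tflops * 1e12 / (2.0 * cards[i].shaders) / 1e6;
        printf("%-6s -> implied core clock ~%4.0f MHz\n", cards[i].name, mhz);
    }
    return 0;
}
```

The W9000 line lands at ~977 MHz, close to its official 975 MHz, which validates the formula; the D700 comes out at ~854 MHz, so 900 looks slightly high and ~850 fits Apple's 3.5 teraflop figure better. The same arithmetic puts the D500 around 720 MHz and the D300 around 780 MHz.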

Amazon has the FirePro W9000 at 3,158 USD. Apple is providing two D700's in all Mac Pro configurations as options. That's over six grand in GPU costs alone.

D500

The D500 is a bit harder. What we know for certain is that it too is Tahiti-based. Although it has a 384-bit bus and more memory bandwidth than AMD's W8000 (the next-best-ranked FirePro available), it has 2.2 teraflops of performance compared to the W8000's 3.2 teraflops.

Because this is also a Tahiti-based unit, it could be based on the W8000 or the W9000, though its memory bus matches the W9000's. We just don't know. Here are the key specs:

- 1525 stream processors (texture mapping units / unified shaders)
- 384-bit memory bus width
- 240 GB/s memory bandwidth
- 3 GB GDDR5 VRAM
On a teraflop basis the D500 is just slightly more powerful than the Pitcairn-based W7000, the third-best-ranked FirePro card. But as we said above, it is a Tahiti-based GPU, and in terms of stream processors it ranks closer to the W8000 than the W7000.

The D500's memory bus is not 256-bit like the W8000's, but the card ships with 1 GB less VRAM. Apple seems to have cleverly balanced various performance metrics to create an upper-mid-level workstation-class option: decreasing stream processors (1792 down to 1525), decreasing VRAM (4 GB down to 3 GB), and possibly lowering the core clock, resulting in a slightly lower-teraflop mid-level option. The result is a reasonable price increase over the D300 for this Tahiti-based D500.

Amazon has the W8000 retailing at 1,308 USD. Apple is providing roughly two of these, or just about 2,500 USD of GPU value. But because of the strategic decreases noted above, the real value Apple is providing may be much closer to just under 2 grand.

D300

The D300 is a Pitcairn-based GPU, just like the AMD FirePro W7000. Of the three GPU options on the new Mac Pro, only the D300 features a narrower 256-bit memory bus, with 160 GB/s of memory bandwidth.

We feel that the D300 is in essence the W7000, but with half the VRAM; the actual W7000 comes with 4 GB, not 2 GB like the D300. Key specs include:

- 1280 stream processors (texture mapping units / unified shaders)
- 256-bit memory bus width
- 160 GB/s memory bandwidth
- 2 GB GDDR5 VRAM
The D300 provides 2 teraflops of single precision, compared to 2.4 for the W7000. Could this again be because of core clock differences? Has AMD provided the D-series FirePros with slightly dialed-down clocks for thermal reasons?

Amazon has FirePro W7000 cards retailing at 649 USD. Apple is providing essentially two of these, or approximately 1,200 USD in GPU value, in the baseline Mac Pro.

Importantly, although the D300 is Pitcairn-based while the other two are Tahiti-based, both chips are part of the same 28 nm series and both use the Graphics Core Next (GCN) architecture.
 