Three threads talk about it.

Make it four. The posts are scattered and interspersed; I'd love to see a debate between MacVidCards and VirtualRain. I'm in the 7950 camp; I have a hard time believing that Apple could sell the mid-range/$4k computer with a 7870, FirePro or not.
 
There's no doubt that the D500 is a unique/custom part based on what's been published...

- It has a 384-bit memory bus which is the same as the R9 280 and the 7950.
- However it has 1526 cores (likely a typo for 1536) which is the same as a 7870XT (which has a 256-bit bus).
- And its rated TFlop performance is 2.2 compared to the other parts mentioned above which are 2.8 or 3.0. So it must be using a 15-20% lower core clock.

- BTW, it has zero in common with a W8000 (which has a 256-bit bus and 1792 cores).

Compared to the D300 it has 50% more memory, 50% more memory bandwidth, but only 10% more compute power.

It would seem to be poor bang-for-the-buck unless you know your GPU compute capabilities are memory bound. Otherwise, the D300s offer 90% of the performance for what seems like around $500 less.
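As a rough sanity check on the clock guess: for GCN parts, peak single-precision FLOPS is roughly 2 × shader count × core clock (one FMA per shader per cycle), so the implied clock and the D300/D500 ratios fall straight out of the published numbers. A quick sketch, assuming "1526" really is a typo for 1536; the 160GB/s / 2GB vs 240GB/s / 3GB figures are from Apple's spec page, the 7950's 2.87 TFlops is its stock rating, so treat it all as approximate:

Code:
# Back-of-the-envelope check, assuming FP32 FLOPS = 2 * shaders * clock (one FMA per shader per cycle)
def implied_clock_mhz(tflops: float, shaders: int) -> float:
    return tflops * 1e12 / (2 * shaders) / 1e6

print(f"D500 implied core clock: {implied_clock_mhz(2.2, 1536):.0f} MHz")   # ~716 MHz
print(f"7950 (Tahiti Pro) check: {implied_clock_mhz(2.87, 1792):.0f} MHz")  # ~800 MHz, i.e. stock

# D300 vs D500, from the figures above (2.0 vs 2.2 TFlops, 160 vs 240 GB/s, 2 vs 3 GB per GPU)
print(f"compute   +{(2.2 / 2.0 - 1) * 100:.0f}%")   # ~10% more
print(f"bandwidth +{(240 / 160 - 1) * 100:.0f}%")   # 50% more
print(f"VRAM      +{(3 / 2 - 1) * 100:.0f}%")       # 50% more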
 
Draw your own conclusions, especially as you are an engineer. You have more than enough data on this forum and Apple's nMP tech specs page. Read up a bit on drivers and device IDs.

I have a hard time believing that Apple could sell the mid-range/$4k computer with a 7870, FirePro or not.

Start getting used to that, because at the launch of the nMP it will be clear. So far all the pointers are hidden in the AMD drivers for 10.9. The 7870XT in Mavericks is below. On the right you have that same card in 10.8.5. Do you see the difference?
 

Attachments

  • Screen Shot 2013-10-06 at 01.22.14.png
  • Screen Shot 2013-10-06 at 08.16.13.png
There's no doubt that the D500 is a unique/custom part based on what's been published...

Agreed, with the caveat that what they've done is take an existing part and simply modify it.

- It has a 384-bit memory bus which is the same as the R9 280 and the 7950.

Correct, and AFAIK changing the memory bus requires large changes in hardware and software. In other words, if they were going to modify an existing part, it seems to me the one thing you wouldn't change would be the memory bus.

- However it has 1526 cores (likely a typo for 1536) which is the same as a 7870XT.

Correct, and again AFAIK changing the number of cores is simple and can be done in either hardware or software.

- And its rated TFlop performance is 2.2 compared to the other parts mentioned above, which are 2.8 or 3.0. So it must be using a 15-20% lower core clock.

Correct, and we all know changing the clock is trivial and is usually done in software.

- BTW, it has zero in common with a W8000 and the 7870 is a direct match for the D300, not the D500.

Compared to the D300 it has 50% more memory, 50% more memory bandwidth, but only 10% more compute power.

So it would be a good choice for a more graphics-intensive workload, based on this.

It would seem to be poor bang for the buck unless you know your GPU compute capabilities are memory bound. Otherwise, the D300s offer 90% of the performance for what seems like around $500 less.

Interesting. GCN/Tahiti is a new architecture designed for good graphics AND compute. Pitcairn (D300) has worse compute performance. Regardless, where do you get the 10% performance figure? I'm not seeing that much detail on the Apple pages.
 
Interesting. GCN/Tahiti is a new architecture designed for good graphics AND compute. Pitcairn (D300) has worse compute performance. Regardless, where do you get the 10% performance figure? I'm not seeing that much detail on the Apple pages.

When I compiled my comparison chart I had 2 TFlops for the D300 and 2.2 TFlops for the D500 (a 10% difference). I assume Apple had published those stats somewhere at the time, because I didn't make them up.

EDIT: here's the source: http://www.apple.com/mac-pro/specs/

I agree that the D500 is likely based on a Tahiti Pro core with some cores disabled (and lower clocks). It makes the most sense.
 
Draw your own conclusions, especially as you are an engineer. You have more than enough data on this forum and Apple's nMP tech specs page. Read up a bit on drivers and device IDs.

I'm not an expert on GPUs, which is why I ask.


So far all the pointers are hidden in the AMD drivers for 10.9. The 7870XT in Mavericks is below. On the right you have that same card in 10.8.5. Do you see the difference?

Interesting, so the 7870 is GCN/Tahiti? I've been digging up what I can, but from what I've seen it seemed like it wasn't.
 
There's the 7870 (Pitcairn XT) and the 7870XT (Tahiti LE, a mutilated Tahiti PRO).
Add a third memory controller to the latter chip and you have your 384-bit, 240GB/s D500.
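For what it's worth, the 240GB/s figure lines up exactly with a 384-bit bus: peak bandwidth is just the bus width in bytes times the effective GDDR5 data rate. A quick sketch; the 6 Gbps figure for a stock 7870XT is its published memory spec, while the implied 5 Gbps for the D500 is my own back-calculation, not anything Apple has stated:

Code:
# Peak memory bandwidth = (bus width in bytes) * (effective GDDR5 data rate in GT/s)
def bandwidth_gbs(bus_bits: int, data_rate_gtps: float) -> float:
    return bus_bits / 8 * data_rate_gtps

print(bandwidth_gbs(384, 5.0))  # 240.0 -> matches Apple's D500 spec, implying ~1250 MHz GDDR5
print(bandwidth_gbs(256, 6.0))  # 192.0 -> stock 7870XT / Tahiti LE
print(bandwidth_gbs(384, 5.0) / bandwidth_gbs(256, 6.0))  # 1.25 -> 25% more than a stock LE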
 
I'm not an expert on GPUs, which is why I ask.

Interesting, so the 7870 is GCN/Tahiti? I've been digging up what I can, but from what I've seen it seemed like it wasn't.

There are two 7870 variants... The one marketed as the 7870XT is a Tahiti-LE part with 1536 cores (similar to the D500), but it only has a 256-bit bus. There is also a 7870 that is a Pitcairn-XT part. Confusing as hell.

Here's the full comparison chart, which is all collected from Wikipedia and Apple's published specs (except for my guesses on clocks in red)...

[attached comparison chart image]
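Since the attached chart image doesn't come through here, a rough reconstruction of the sort of rows it contained, from memory of the Wikipedia and Apple figures; it's approximate and not the original attachment, and Apple doesn't publish Dx00 clocks, so those can only be back-calculated from the TFlops ratings:

Code:
# Approximate published specs: shaders, bus width (bits), bandwidth (GB/s), FP32 TFlops, VRAM (GB)
parts = {
    "HD 7870 (Pitcairn XT)": dict(shaders=1280, bus=256, bw=154, tflops=2.56, vram=2),
    "HD 7870XT (Tahiti LE)": dict(shaders=1536, bus=256, bw=192, tflops=2.8,  vram=2),
    "HD 7950 (Tahiti Pro)":  dict(shaders=1792, bus=384, bw=240, tflops=2.87, vram=3),
    "FirePro D300":          dict(shaders=1280, bus=256, bw=160, tflops=2.0,  vram=2),
    "FirePro D500":          dict(shaders=1526, bus=384, bw=240, tflops=2.2,  vram=3),  # 1526 likely a typo for 1536
    "FirePro D700":          dict(shaders=2048, bus=384, bw=264, tflops=3.5,  vram=6),
}
for name, p in parts.items():
    print(f"{name:23} {p['shaders']:>4} SPs  {p['bus']}-bit  {p['bw']:>3} GB/s  "
          f"{p['tflops']:>4} TFlops  {p['vram']} GB")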
 
Perhaps learning the virtue of a little patience and then reading reviews helps against this special kind of confusion. :D

Because I have nothing better to do than learn about GPUs while waiting patiently for Apple to release the computer. I've learned a lot in the past week, and in the first week after the nMP is released I'll be better informed to make a buying decision. Maybe I'll buy then, maybe not, but meanwhile I've learned enough that it's an option.

You did put a smiley there, so I'm not picking on you, but I don't get it when a person posts a thread on a forum looking to learn and share information, and people immediately say "don't learn and share information". If you don't want to participate in the speculation, just don't say anything, IMO.
 
I agree that the D500 is likely based on a Tahiti Pro core with some cores disabled (and lower clocks). It makes the most sense.

It doesn't make sense to disable cores in a perfectly working GPU.
But it does make sense to enable/add a third memory controller in/to a GPU which already has some cores disabled because it didn't pass quality control in the production process. Here you'll find how easily it can be done.
The first option will always be more expensive than the second, even bought in bulk.
 
It doesn't make sense to disable cores in a perfectly working GPU.
But it does make sense to enable/add a third memory controller in/to a GPU which already has some cores disabled because it didn't pass quality control in the production process.

You can't add functionality to silicon that's failed QC. Once the wafer is spun and cut... that's it. So the added memory controllers for a 384-bit wide bus were undoubtedly present before it was binned. And the only AMD architecture (pre R9) that has a 384-bit bus is Tahiti, thus it follows that the D500 is a bin of Tahiti with a lower core count as opposed to a Pitcairn part with added memory controllers. However, I agree the lower core counts as you go down the product hierarchy may be based on yields and QC binning rather than intentional disabling, although as I understand it, as yields increase, they can burn traces to disable cores in order to keep volumes up for low-end parts.
 
I mean Tahiti all the time, but the LE, not the PRO.
– The LE is identified as the D500 in current drivers, i.e. its device ID is "tied" to the D500 OpenGL and OpenCL engine.
– The only difference from Apple's specs is the memory bus width and bandwidth.

Regarding memory controllers, I believe that they can be disabled/enabled via resistors, i.e. they're not necessarily cut off like cores are.
 
I mean Tahiti all the time, but the LE, not the PRO.
– The LE is identified as the D500 in current drivers, i.e. its device ID is "tied" to the D500 OpenGL and OpenCL engine.
– The only difference from Apple's specs is the memory bus width and bandwidth.

Regarding memory controllers, I believe that they can be disabled/enabled via resistors, i.e. they're not necessarily cut off like cores are.

Ok, I see. Right.

At any rate, I think the question is, did adding that extra memory bandwidth open up some added performance for the D500 vs a regular LE? If the LE cores are starved, then maybe this bump in memory bandwidth will unlock some added performance.
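One crude way to put a number on "starved" is peak FLOPS per byte of bandwidth: the lower the ratio, the better fed the cores are. It ignores caches and real workload behaviour, so treat it as a rough indicator only; the 7870XT numbers are its stock specs and the D500 numbers are Apple's:

Code:
# FLOP-per-byte ratio: peak FP32 FLOPS / peak bandwidth. Lower = more bandwidth per unit of compute.
def flops_per_byte(tflops: float, gbs: float) -> float:
    return tflops * 1e12 / (gbs * 1e9)

print(f"7870XT (Tahiti LE): {flops_per_byte(2.8, 192):.1f} FLOPs/byte")  # ~14.6
print(f"D500:               {flops_per_byte(2.2, 240):.1f} FLOPs/byte")  # ~9.2
# The D500 gets roughly 60% more bandwidth per TFlop than a stock LE, so bandwidth-bound
# workloads should benefit even though its peak compute is lower.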
 
OK, so check me on this ...

  • It appears to be a modified 7870XT - which is Tahiti
  • It has a wider bus than the 7870XT (more like the 7950)
  • It has more VRAM than the 7870XT (more like the 7950)
  • It has a slower clock than the 7870XT (more like the 7950)

So if this is true, whether it is a 7870 or a 7950 is immaterial it seems - it is a D500 (which is a 7870XT masquerading as a 7950). Regardless it has the latest architecture and upper level specs of the AMD cards.

If you go up to the D700 you're buying for extreme capabilities (and paying for it too probably). However going down to the D300 you're getting the older architecture.

This has been very educational, thanks.
 
At any rate, I think the question is, did adding that extra memory bandwidth open up some added performance for the D500 vs a regular LE?

Certainly, especially where it matters most: huge resolutions and Eyefinity-like display setups. The additional 1GB of VRAM helps as well.
Less so in GPGPU, I think; there, code optimization gives the most spectacular results.
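To put some very rough numbers on the resolution point, here's a back-of-the-envelope framebuffer-traffic estimate; the "passes per frame" factor is a made-up illustration for overdraw and compositing, not a measured figure:

Code:
# Very rough framebuffer traffic: pixels * 4 bytes (RGBA8) * fps * assumed full-screen passes per frame
def fb_traffic_gbs(width: int, height: int, fps: int, passes: float) -> float:
    return width * height * 4 * fps * passes / 1e9

print(fb_traffic_gbs(2560, 1440, 60, 10))      # ~8.8 GB/s at 1440p
print(fb_traffic_gbs(3840, 2160, 60, 10))      # ~19.9 GB/s at 4K
print(fb_traffic_gbs(3 * 2560, 1440, 60, 10))  # ~26.5 GB/s on a triple-1440p Eyefinity wall
# Framebuffer traffic alone is a small slice of 240 GB/s, but texture and intermediate-buffer
# traffic scale with resolution too, which is where the extra bandwidth and VRAM help.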
 
OK, so check me on this ...

  • It appears to be a modified 7870XT - which is Tahiti
  • It has a wider bus than the 7870XT (more like the 7950)
  • It has more VRAM than the 7870XT (more like the 7950)
  • It has a slower clock than the 7870XT (more like the 7950)

So if this is true, whether it is a 7870 or a 7950 is immaterial it seems - it is a D500 (which is a 7870XT masquerading as a 7950). Regardless it has the latest architecture and upper level specs of the AMD cards.

If you go up to the D700 you're buying for extreme capabilities (and paying for it too probably). However going down to the D300 you're getting the older architecture.

This has been very educational, thanks.

I would flip it around and say the D500 is based on a Tahiti Pro (7950) with some cores disabled, but it's largely semantics.

The only thing I'd add to this, is that it seems a higher end Pitcairn (D300) is better at pushing pixels than a low end Tahiti (D500) which was designed to be better at compute. You can see this in Tom's review of FirePro cards for gaming - keeping in mind that the W8000 is based on Tahiti Pro and W7000 on Pitcairn...(link). However, as 666sheep and I were discussing above, the added memory bandwidth of the D500 might rectify this shortcoming.

So if you're doing gaming or anything OpenGL related, the D300 might actually be pretty good bang-for-the-buck. However, on OpenCL, the D500 is definitely the better choice. And, of course, the D700 is going to be better than both at all tasks.
 
How fast is it?

MacVidCards believes it's a 7870XT (he references an earlier post where he demonstrates why but I don't have that link, anybody?)

Architosh believes it's a W8000. I'm unclear on which consumer GPU that is equivalent to, but they firmly state it is Tahiti, while the 7870 above is Pitcairn, I believe.

VirtualRain calls it out as a 7950 (chart link)


Which do you think it is?

A D700 might score an average of 19,357 on the Luxmark LuxBall HDR render and an average of 2,462 points on the Luxmark Sala render benchmark if it's not significantly crippled by Apple's mods. See post #913 here: https://forums.macrumors.com/threads/1333421/.
 
I would flip it around and say the D500 is based on a Tahiti Pro (7950) with some cores disabled, but it's largely semantics.

The only thing I'd add to this, is that it seems a higher end Pitcairn (D300) is better at pushing pixels than a low end Tahiti (D500) which was designed to be better at compute. You can see this in Tom's review of FirePro cards for gaming - keeping in mind that the W8000 is based on Tahiti Pro and W7000 on Pitcairn...(link). However, as 666sheep and I were discussing above, the added memory bandwidth of the D500 might rectify this shortcoming.

So if you're doing gaming or anything OpenGL related, the D300 might actually be pretty good bang-for-the-buck. However, on OpenCL, the D500 is definitely the better choice. And, of course, the D700 is going to be better than both at all tasks.

Can we discount the W8000 & W7000, as they don't have the 384-bit-wide memory bus of the D500? I found this today on the Apple Store Nigeria, which I haven't found on the UK store!

Any comments welcome!

The other link is a comparison chart from the Lift Gamma Gain site:

http://www.apple.com/ng/mac-pro/specs/



http://www.liftgammagain.com/forum/index.php?threads/amd-fire-pro-d-what.1956/
 

Attachments

  • image.jpg
I would flip it around and say the D500 is based on a Tahiti Pro (7950) with some cores disabled, but it's largely semantics.

Seems like it; however, 666sheep posted screen grabs of Mavericks identifying the 7870XT as the D500, which seems pretty conclusive.


The only thing I'd add to this, is that it seems a higher end Pitcairn (D300) is better at pushing pixels than a low end Tahiti (D500) which was designed to be better at compute. You can see this in Tom's review of FirePro cards for gaming - keeping in mind that the W8000 is based on Tahiti Pro and W7000 on Pitcairn...(link). However, as 666sheep and I were discussing above, the added memory bandwidth of the D500 might rectify this shortcoming.

So if you're doing gaming or anything OpenGL related, the D300 might actually be pretty good bang-for-the-buck. However, on OpenCL, the D500 is definitely the better choice. And, of course, the D700 is going to be better than both at all tasks.

Well, this discussion makes my decision harder. Obviously I'd like to get away with a $3k base unit. OTOH, I wouldn't mind a machine that kicks butt in the few games I play, too. Oh, and I'd like to start using OpenCL for some modeling software I'm writing... I have no idea what to get. Should I get the base, push up the CPU line, or push up the GPU line? Of course, only I can answer that.
 