
peabo

macrumors regular
Original poster
Feb 14, 2008
I wasn't able to find whether this has been discussed elsewhere, but 3DMark now recognises the D700s in the nMP as "AMD Radeon R9 280X". Until just recently, it listed them as 'Unknown'.

http://www.3dmark.com/3dm/2038247
 
Side note: Do we have those tests for the D300 and D500 yet? So much scattered stuff here... :p
 
I wasn't able to find whether this has been discussed elsewhere, but 3DMark now recognises the D700s in the nMP as "AMD Radeon R9 280X". Until just recently, it listed them as 'Unknown'.

http://www.3dmark.com/3dm/2038247

Interesting. Seriously, people should just run the SPECviewperf benchmark and the whole question of whether the FirePro Dx00s are indeed real workstation GPUs will be answered. :rolleyes:
 
Interesting. Seriously, people should just run the SPECviewperf benchmark and the whole question of whether the FirePro Dx00s are indeed real workstation GPUs will be answered. :rolleyes:
One more time: the chips in consumer and workstation cards are essentially the same. The workstation part may have slightly different firmware, but the difference is mostly in the drivers.
 
I'm not trying to suggest that this is any kind of proof that they are not FirePro cards, but it's interesting, regardless. It may suggest that the cards are using whatever new tech AMD introduced that distinguishes a 280X from a 7970.
 
One more time: the chips in consumer and workstation cards are essentially the same. The workstation part may have slightly different firmware, but the difference is mostly in the drivers.

Yeah, I've said it before as well: the biggest difference will be in the drivers, and if it's called FirePro without a FirePro driver, then it's technically not a FirePro GPU; it's just a Radeon GPU with the FirePro name on it.

Like I said, a SPECviewperf benchmark run will prove the validity of the driver, as the result will be very obvious.
 
One more time: the chips in consumer and workstation cards are essentially the same. The workstation part may have slightly different firmware, but the difference is mostly in the drivers.

I thought the main difference was in the floating-point calculations at the hardware level. On consumer cards this is disabled? Soft-modding GTX cards to act as their Quadro counterparts never resulted in quite the same performance when tested in apps like 3ds Max.
 
Yes, the D500 is a 7870 XT.

This is the problem: how things are "reported" means just that... reported. Is there some deep-down buried line in the firmware that actually says 7870 XT, or is the software doing the reporting looking the chip up in a table? I say this because the D500 is an interesting test case: no 7870 XT I know of ever had a 384-bit bus like the D500 has; they have 256-bit buses. Any chip can have its clocks lowered or raised, but the chip and architecture are the same. It may be based on the same Tahiti LE core, but the wider bus suggests the silicon isn't exactly the same, so it muddies the water a bit.
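For what it's worth, here is a rough sketch of what that bus-width difference means for theoretical bandwidth. The 5 GT/s effective memory data rate is an assumption for illustration only, not a figure from this thread; the only numbers taken from the discussion are the 384-bit and 256-bit widths.

Code:
# Theoretical memory bandwidth = (bus width in bits / 8) * effective data rate.
# The 5.0 GT/s data rate is an assumed round number, purely for illustration.
def bandwidth_gb_s(bus_width_bits, data_rate_gt_s):
    return bus_width_bits / 8 * data_rate_gt_s

print(bandwidth_gb_s(384, 5.0))  # D500-style 384-bit bus  -> 240.0 GB/s
print(bandwidth_gb_s(256, 5.0))  # typical 7870 XT 256-bit -> 160.0 GB/s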
 
I thought the main difference was in the floating-point calculations at the hardware level. On consumer cards this is disabled? Soft-modding GTX cards to act as their Quadro counterparts never resulted in quite the same performance when tested in apps like 3ds Max.

Actually, after a little research, it does seem to come down to things like double-precision calculations. The cards all start out the same (as in workstation grade), but if faults are discovered under testing, and those faults can be tolerated in consumer-level cards, the features are locked out via the software driver or even by soldering on the circuit board (I assume to stop soft-modding). Those cards end up as consumer cards or go in the trash can (no pun intended).
 
AMD doesn't hobble double-precision floating point on its consumer cards. The difference at that point mostly is down to drivers and the level of testing/support you can expect from application vendors.
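If anyone wants to check what their own cards expose, here's a small sketch using the pyopencl package that lists each GPU and whether it advertises a double-precision extension. Device names and extension strings depend entirely on the installed driver, so treat this as illustrative only.

Code:
# List OpenCL GPUs and whether they advertise double-precision support.
# Requires the pyopencl package; output depends on the installed driver.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices(device_type=cl.device_type.GPU):
        exts = device.extensions
        has_fp64 = ("cl_khr_fp64" in exts) or ("cl_amd_fp64" in exts)
        print("%s: double precision %s" % (device.name, "yes" if has_fp64 else "no"))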
 
AMD doesn't hobble double-precision floating point on its consumer cards. The difference at that point mostly is down to drivers and the level of testing/support you can expect from application vendors.

Ah, OK. I must admit the articles I found were all about Nvidia GTX/Quadro cards.

----------

AMD doesn't hobble double-precision floating point on its consumer cards. The difference at that point mostly is down to drivers and the level of testing/support you can expect from application vendors.

Read this:

http://icrontic.com/article/the-real-difference-between-workstation-and-desktop-gpus
 
[G5]Hydra said:
This is the problem: how things are "reported" means just that... reported. Is there some deep-down buried line in the firmware that actually says 7870 XT, or is the software doing the reporting looking the chip up in a table?

It's reported as a 7870 XT in some Windows programs because it uses the Tahiti LE device ID.
The firmware says it's actually Tahiti XT2 (with disabled cores, of course). Tahiti B0 XT2, precisely. The same chip is used in the 7970 GHz Edition and the first batch of R9 280X cards. The D700 is also B0 XT2, but with all cores active. I wonder whether the cores in the D500 are laser-cut or only disabled in the BIOS...
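In other words, the name a benchmark shows is usually just a table lookup on the PCI vendor/device ID the card presents. A tiny sketch of the idea; the IDs below are illustrative and not verified here, and real tools ship far larger tables.

Code:
# How a benchmark might map a PCI vendor/device ID to a marketing name.
# The entries are illustrative only; real databases are much larger.
PCI_ID_NAMES = {
    (0x1002, 0x679E): "AMD Radeon HD 7870 XT (Tahiti LE)",
    (0x1002, 0x6798): "AMD Radeon HD 7970 / R9 280X (Tahiti XT)",
}

def lookup_gpu(vendor_id, device_id):
    # Unlisted IDs fall back to "Unknown", which is exactly what 3DMark
    # showed for the D700 before its database was updated.
    return PCI_ID_NAMES.get((vendor_id, device_id), "Unknown")

print(lookup_gpu(0x1002, 0x679E))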
 
It's reported as a 7870 XT in some Windows programs because it uses the Tahiti LE device ID.
The firmware says it's actually Tahiti XT2 (with disabled cores, of course). Tahiti B0 XT2, precisely. The same chip is used in the 7970 GHz Edition and the first batch of R9 280X cards. The D700 is also B0 XT2, but with all cores active. I wonder whether the cores in the D500 are laser-cut or only disabled in the BIOS...
Wow that is an interesting idea. I'm not sure if anyone would buy a new D500 equipped Mac simply to try to mod the GPUs into D700s. But it certainly opens up possibilities for people to hack these things a few years down the road when they will be cheap on the used market.
I usually wait a year or two until I attempt any "serious" upgrades to my Macs.
 
It's reported as a 7870 XT in some Windows programs because it uses the Tahiti LE device ID.
The firmware says it's actually Tahiti XT2 (with disabled cores, of course). Tahiti B0 XT2, precisely. The same chip is used in the 7970 GHz Edition and the first batch of R9 280X cards. The D700 is also B0 XT2, but with all cores active. I wonder whether the cores in the D500 are laser-cut or only disabled in the BIOS...

Interesting, thanks for clarifying... and given this part seems unique to Apple, perhaps it's more likely the cores are simply disabled in the BIOS.
 
I'm not sure if anyone would buy a new D500 equipped Mac simply to try to mod the GPUs into D700s.

It does not seem that simple. I didn't see an EEPROM in any picture of the nMP GPU daughtercards. The VBIOSes and EFI drivers now seem to be loaded from the nMP's EFI.
 
Wow that is an interesting idea. I'm not sure if anyone would buy a new D500 equipped Mac simply to try to mod the GPUs into D700s. But it certainly opens up possibilities for people to hack these things a few years down the road when they will be cheap on the used market.
I usually wait a year or two until I attempt any "serious" upgrades to my Macs.

Note that the D500 and D700 have the same TDP despite the performance advantages of the latter. Depending on how binning is being handled, it's conceivable that even if the D500 parts could be modded to enable additional cores, they'd run too hot.
 
Note that the D500 and D700 have the same TDP despite the performance advantages of the latter. Depending on how binning is being handled, it's conceivable that even if the D500 parts could be modded to enable additional cores, they'd run too hot.

Yup, either binning or salvaging partly defective D700s. Most likely both.
 
[G5]Hydra said:
This is the problem: how things are "reported" means just that... reported. Is there some deep-down buried line in the firmware that actually says 7870 XT, or is the software doing the reporting looking the chip up in a table? I say this because the D500 is an interesting test case: no 7870 XT I know of ever had a 384-bit bus like the D500 has; they have 256-bit buses. Any chip can have its clocks lowered or raised, but the chip and architecture are the same. It may be based on the same Tahiti LE core, but the wider bus suggests the silicon isn't exactly the same, so it muddies the water a bit.

Most likely 3DMark is just looking up the chip based on base clock speed and architecture. It's all the same GPU core, though; the rest of the card is what makes FirePros special.
 
cMP with a couple Titans STOMPS nMP all maxed out

I wasn't able to find whether this has been discussed elsewhere, but 3DMark now recognises the D700s in the nMP as "AMD Radeon R9 280X". Until just recently, it listed them as 'Unknown'.

http://www.3dmark.com/3dm/2038247

Here is a 12 Core nMP:

http://www.3dmark.com/fs/1627960

And here is the Champion, a 12 Core cMP:

http://www.3dmark.com/3dm/2415477

No, I don't think for a second that this is the be-all, end-all test.

There are certainly PLENTY of tests where a nMP can beat mine.

But in the PC World, this test is important. And for the "Firestrike" part, the cMP is 40% faster. The other two tests are a draw.

And it declares that a 12 Core nMP D700 is faster than "93% of PCs" while my ancient dinosaur of a 12 Core somehow beat 97%. Hmmm.

But it certainly shows that despite a much slower PCIe bus and MUCH slower SSD, the "Classic" Mac Pro can be upgraded GPU-wise to equal or exceed the new one. And I don't even have 5690s yet.

And the cMP can still be upgraded further (a 780 Ti as soon as 10.9.2 and the finished Web Driver drop), whereas the new one, well, it's kind of done being upgraded GPU-wise. Certainly for the next 6-18 months, anyway.

And I guarantee that I have far less than $10K in this cMP. The nMP 12 core, on the other hand....

Let's just say that you could get yourself a cMP with dual 5680s, dual Titans, and a power supply... and get yourself "Thunderbolt" by adding an rMBP with the money left over, compared to an nMP 12 core.
 

Attachments: four screenshots of the 3DMark results (taken 2014-02-10).
And it declares that a 12 Core nMP D700 is faster than "93% of PCs" while my ancient dinosaur of a 12 Core somehow beat 97%. Hmmm.

I would think that if I sold video cards, I would want there to be empirical proof that you could simply upgrade an "ancient dinosaur" with new graphics cards to bring it in line with newer hardware's performance. ;) That being said, the fact that dual Titans beat 97% of the machines tested should be of little surprise. Maybe you are knee-deep in expensive GPUs, but most people never touch a machine with dual GPUs in it, let alone one with two top-tier GPUs that cost a pretty penny each.
 
And I guarantee that I have far less than $10K in this cMP. The nMP 12 core, on the other hand....

Let's just say that you could get yourself a cMP with dual 5680s, dual Titans, and a power supply... and get yourself "Thunderbolt" by adding an rMBP with the money left over, compared to an nMP 12 core.

A 12 core with D700s is $7,600, not $10K. So you have to buy a Classic Mac Pro with dual processors for $4K, then add $2K worth of processors and another $2K worth of GPUs, plus another power supply, for a total of $8K+. And yes, you do get better GPU performance compared to the current Mac Pro. The CPU performance will be around the same, memory performance will be much lower, there's no USB 3.0 and no Thunderbolt, and it costs more. Just to get better GPU performance for games and CUDA apps, is it really worth it?

I'm not even considering the 256 GB SSD you are getting inside the new one. That'd cost another $400 if you wanted that in the old Mac Pro.
 
A 12 core with D700s is $7,600, not $10K. So you have to buy a Classic Mac Pro with dual processors for $4K, then add $2K worth of processors and another $2K worth of GPUs, plus another power supply, for a total of $8K+. And yes, you do get better GPU performance compared to the current Mac Pro. The CPU performance will be around the same, memory performance will be much lower, there's no USB 3.0 and no Thunderbolt, and it costs more. Just to get better GPU performance for games and CUDA apps, is it really worth it?

I'm not even considering the 256 GB SSD you are getting inside the new one. That'd cost another $400 if you wanted that in the old Mac Pro.

My apologies, I could swear I read $10K for the maxed-out new one.

My mistake.

In any case, your math was even more off.

The 2009 was $1,100 (current value of used 2009 Octo 2.26)
The Dual 5680s were $1,500
The Dual Titans were $2,000
The PC power supply was $45

So, I've got $4,645 in the winning machine
Vs $7,600 for the machine that LOST

Leaving $3K for a nicely appointed rMBP

There's your FREE Thunderbolt

So, faster machine and a rMBP or slower one by itself.

Tough one.
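Just to make the arithmetic explicit, here are the same figures totted up; every number is the poster's own, and nothing here is independently verified.

Code:
# The cost comparison from the post above; all prices are the poster's figures.
cmp_build = {
    "used 2009 octo 2.26": 1100,
    "dual X5680s": 1500,
    "dual Titans": 2000,
    "PC power supply": 45,
}
cmp_total = sum(cmp_build.values())   # 4645
nmp_12core_d700 = 7600
print(cmp_total)                      # 4645
print(nmp_12core_d700 - cmp_total)    # 2955, roughly $3K left for an rMBP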
 
So, I've got $4,645 in the winning machine
Vs $7,600 for the machine that LOST

Ignoring the fact that you might be biased in favor of having cards to sell for old machines (no shame in that, mind you), most GPU-dependent benches don't really need the maximum-core CPU monster you can buy, to be honest. There seem to be very few use cases where you are pounding the CPU cores and the GPU cores at the same time, and if you have a workflow that badly needs either, you probably have machines dedicated to each particular task. If you need all the GPU power you can get, then most likely the quad 3.7 nMP with D700s is the way to go. If you need CPU cores galore, the 2.7 twelve-core nMP with D300s is probably advisable, without wasting any money on GPUs you don't need. If you want something of a jack of all trades, the hex-core nMP with D700s is a nice compromise.

When you write things like "LOST" in all caps, it brings to mind kids fighting on PS4 vs. Xbox boards or AMD vs. Nvidia forums, but I doubt gaming is the primary use for many people at all on the old or new MP. Don't get me wrong, I love my old 2006 3.0GHz MP, and it has kept fresher than I would have ever imagined this far into its life thanks to GPU upgrades, but the new reality is that I intend to treat the nMP like I would MBPs or MBAs: sell it off while it still has good resale value and buy a new one more often, instead of upgrading the old one. I suspect I'm not the only one in that boat.
 
So, faster machine and a rMBP or slower one by itself.

Tough one.

I have to admit: I had a little trouble deciding to post this, Dave. I've done business with you in the past and I'm quite happy with the product I've purchased. I've also recommended you and your services to other Mac Pro owners who didn't realize they had a much larger choice of GPUs for their rigs at the time.

I think you're pushing buttons and trolling, here. I actually think you're smarter than you're letting on, but let's play along for a second.

Clearly, the old Mac Pro, with a pair of (now dated) X5690s, will be a CPU powerhouse. It'll be that powerhouse for applications that are properly threaded and don't spend their time constantly context switching. It'll also be a powerhouse for apps that don't need things like Intel's AVX technology; that'll never be available on the old Mac Pro, no matter how hard you try.

And yes, you can augment the power supply situation in the old beast so that you can power up 2 Titans or 2 780Ti GPUs. You'd do this assuming your application can take advantage of more than 1 GPU, and that it needs CUDA processing; note for this discussion I'm flat-out ignoring games. I don't really care about gaming on a Mac and I personally think that those that do are foolish.

With all of this assembled, you have a system that performs well today, with today's applications. It's a tactical decision. And potentially a good one.

I'm an architect by trade (and nature). Tactical thought is the antithesis of what I need to do for my day-to-day business. Strategic thought is more my thing, and the strategy here is: what app changes are coming in the future? Apple is clearly pushing to relegate nVidia's CUDA to an "Awww... isn't that cute!" place. OpenCL isn't there yet, today. Apps that have been CUDA-aware for a few years are going to take time to get themselves shifted to OpenCL. But several are already on their way.

Apple wants OpenCL to win, and right now, that means AMD is the choice for GPUs. Love it or hate it, nVidia just can't (or, more accurately: won't) get their OpenCL performance up to the same level. With all that time and money invested in CUDA, why would they?

Ignoring the GPUs for a second, the CPU is something else the old Mac Pro just can't update. It's stuck at the X5690 as the top chip and will never go beyond that. As more and more applications start tapping Intel's AVX to help speed things up a bit, the old Mac Pro's CPUs will seem less and less desirable. Final Cut Pro X has been using AVX since its inception. Adobe has hinted that Premiere Pro is, as well (though I have no proof of that).
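For anyone curious whether their own machine exposes AVX, here's a quick sketch using the CPU feature flags macOS publishes via sysctl; a Westmere X5690 cMP won't list it, while the Ivy Bridge Xeons in the nMP will.

Code:
# Check the CPU feature flags macOS exposes on Intel machines; AVX shows up
# as "AVX1.0" where supported (Westmere-era Xeons in the cMP lack it).
import subprocess

features = subprocess.check_output(
    ["sysctl", "-n", "machdep.cpu.features"], text=True
)
print("AVX available" if "AVX" in features else "no AVX on this CPU")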

The new machine won't do as well in most benchmarks as the old Mac Pro will. And for the time being, there are a bunch of apps that might even run faster on the old Pro, assuming the owner has updated the GPU to something more modern. But it won't always be the case. Count on it.
 