
macpro2000

I'm going to order a Hex, but it appears as though no one can say whether there is truly any benefit to the D500 over the D300 in real-world performance. Any news anybody knows about?
 
It depends on your definition of 'true performance'. What do you need your GPU to do? If it's games only, then don't bother and save your money. If you're looking ahead at GPGPU capability, VRAM and double-precision float math, then get the D500. I would get the D700 and leave nothing amiss.
 
Would you be able to tell me what kind of apps utilize "float math"? Thanks!
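
Typically it's scientific and engineering simulation, some 3D renderers, and number-crunching plug-ins that push their math through OpenCL in double precision; games and most consumer apps stick to single precision. As a rough illustration of what "double float math" looks like on the GPU, here's a minimal sketch using the pyopencl bindings. The axpy kernel and array size are made up for illustration, and it assumes the selected device reports the cl_khr_fp64 extension (the D-series cards should).

Code:
# Minimal pyopencl sketch of double-precision ("double") math on the GPU.
# Hypothetical example: an axpy kernel (y = a*x + y) in float64. Assumes
# pyopencl is installed and the device advertises cl_khr_fp64.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()      # pick an OpenCL device, e.g. one of the FirePros
queue = cl.CommandQueue(ctx)

kernel_src = """
#pragma OPENCL EXTENSION cl_khr_fp64 : enable
__kernel void axpy(__global const double *x, __global double *y, const double a)
{
    int i = get_global_id(0);
    y[i] = a * x[i] + y[i];         // double-precision multiply-add
}
"""
prg = cl.Program(ctx, kernel_src).build()

n = 1_000_000
x = np.random.rand(n)               # numpy defaults to float64, i.e. "double"
y = np.random.rand(n)

mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=y)

prg.axpy(queue, (n,), None, x_buf, y_buf, np.float64(2.0))
cl.enqueue_copy(queue, y, y_buf)    # read the result back to host memory
print(y[:5])

The point of stepping up to the D500/D700 for this kind of work is the extra VRAM and double-precision throughput mentioned above.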
 
It depends on your definition of 'true performance'. What do you need your GPU to do? If it's games only, then don't bother and save your money. If you're looking ahead at GPGPU capability, VRAM and double-precision float math, then get the D500. I would get the D700 and leave nothing amiss.

Exactly right.

If you are interested, based upon the spreadsheet somebody posted, the most popular configuration among board members is 6-core, 32 GB, D700, 1TB.

In my case, I opted for D700s because I knew that although you can upgrade the RAM, SSD and even the CPU later, we don't know how easy it will be to get upgraded GPUs down the road.
 
If you use FCPX, the D500s (with 6-core) will give you a significant advantage (vs D300s with 4-core)... http://www.barefeats.com/tube05.html

How much of that is attributable to the GPU vs. the CPU? Still tough to call.

If you don't use FCPX, you can save a bit and get the D300s, or you can spend a few hundred bucks on the D500s anyway for a bit of extra VRAM and added double-precision compute performance should you need it down the road.

It's a $350-$400 decision. Ask yourself, are you more likely to regret spending that or not spending that?

I still think the 6-core, D500 combo is the sweet spot of this generation. Buying anything else really requires some serious justification. :)
 
That benchmark bugged me out at first, but it goes to show you how many variables there are in measuring performance.

In their benchmark, they compared all three cards, but they used three different CPU configurations. The 12 core was slowest because those steps were likely single-threaded, and the 12 core has the lowest clock rate. Had it been something multi-threaded, the 12 core would've done better.

I wish I could see a benchmark that used the same number of cores with each of the three cards, so you could tell which performance gains/losses were due to the GPU alone.

Then a second benchmark would use the same GPU with 4, 6, 8 and 12 cores. Then we would know which tasks go faster with more cores. Something like the rough sketch below.
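
The timings here are made up; only the structure of the two passes matters:

Code:
# Two controlled passes: (1) same CPU, vary only the GPU; (2) same GPU,
# vary only the core count. The seconds below are hypothetical placeholders
# for real measurements.
timings = {
    # Pass 1: 6-core CPU held constant, GPU changes
    ("6-core", "D300"): 100.0,
    ("6-core", "D500"): 95.0,
    ("6-core", "D700"): 82.0,
    # Pass 2: D500 held constant, core count changes
    ("4-core", "D500"): 104.0,
    ("8-core", "D500"): 90.0,
    ("12-core", "D500"): 86.0,
}

baseline = timings[("6-core", "D300")]
for (cpu, gpu), secs in timings.items():
    print(f"{cpu:>8} / {gpu}: {baseline / secs:.2f}x vs the 6-core D300")

Within pass 1 any difference is down to the card alone; within pass 2 it's down to the cores and clock alone.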
 
How about the D700s? How do the numbers on those compare to the others?

On pure horsepower as measured in teraflops, the numbers are:

D300: 2.0 teraflops per card
D500: 2.2 teraflops per card
D700: 3.5 teraflops per card

But because not every program uses the GPUs, much less both GPUs, you may or may not notice the speed difference between the cards.
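
To put those in relative terms (peak single-precision only; each machine carries two cards, so the totals simply double):

Code:
# Quick back-of-the-envelope comparison using the per-card teraflops above.
cards = {"D300": 2.0, "D500": 2.2, "D700": 3.5}

for name, tflops in cards.items():
    total = 2 * tflops                       # two cards per machine
    gain = (tflops / cards["D300"] - 1) * 100
    print(f"{name}: {tflops} TFLOPS/card, {total} TFLOPS total, +{gain:.0f}% vs D300")

So on paper the D500 is roughly a 10% step over the D300, while the D700 is around 75% more, and that only shows up to the extent an app is actually GPU-bound.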

The trouble is, every benchmark is a mixture of configurations. Some are 4, 6, 8 or 12 cores, and each one is at a different clock rate (the more cores you have, the hotter the chip gets, so the higher core counts run at lower clocks). Meanwhile, when a program uses only one core, the 4 core kicks ass... but when it uses more cores, the 12 core kicks ass.

So it's common to see the 4 core D300 beat the 12 core D700. That's why I would love to see the same batch of tests run on the same number of cores, but with each of the 3 GPUs.
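
A toy model shows why. The workload split and "work units" below are completely made up, and the clocks are roughly the 2013 line-up, but it captures the shape of the effect:

Code:
# Toy model: time = serial work / clock
#                 + parallel work / (clock * cores)
#                 + GPU work / card teraflops
# Workload split is hypothetical; clocks roughly match the 2013 Mac Pro CPUs.
configs = {
    "4-core 3.7 GHz + D300":  (4, 3.7, 2.0),
    "12-core 2.7 GHz + D700": (12, 2.7, 3.5),
}
serial, parallel, gpu = 80.0, 15.0, 5.0      # a mostly single-threaded task

for name, (cores, clock, tflops) in configs.items():
    t = serial / clock + parallel / (clock * cores) + gpu / tflops
    print(f"{name}: {t:.1f} (lower is better)")

With that split the quad/D300 wins; shift most of the work into the parallel and GPU terms and the 12-core/D700 wins comfortably. That's the whole "depends on your software" point.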
 
I'm going to order a Hex, but it appears as though no one can say whether there is truly any benefit to the D500 over the D300 in real-world performance. Any news anybody knows about?

You won't like my answer, but the answer is... it'll depend on what kind of software you use and whether that software is optimized for faster clock speeds or for other factors.

'Cause if you look at the D300 and D500 benchmarks floating around, the result could be a huge difference, a slight difference or, with some, the D300 actually beating the D500. So really, it'll depend on what software you use and whether the developer would actually bother to properly support the Dxxx cards.

It sucks, but let's put things into perspective here: SLI and Crossfire have been around for a very long time, but how many apps actually REALLY make use of that feature? At least on the Windows side of things. On the bright side, because it's a Mac, things might be different; just look at how many apps have been updated to be Retina-ready and OpenCL-compatible. So the situation might be different on the Mac, but this is just speculation ;)
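
For what it's worth, both cards do show up as separate OpenCL devices, so it's on each developer to actually queue work onto the second one. A quick way to see what an app has to work with (sketch using the pyopencl bindings, assuming they're installed):

Code:
# List every OpenCL platform/device the system exposes. On the new Mac Pro
# you'd expect the CPU plus two FirePro entries; an app only benefits from
# the second card if it explicitly dispatches work to it.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(platform.name, "|", device.name,
              "| compute units:", device.max_compute_units,
              "| fp64:", "cl_khr_fp64" in device.extensions)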
 
'Cause if you look at the D300 and D500 benchmarks floating around, the result could be a huge difference, a slight difference or, with some, the D300 actually beating the D500.

Technically, assuming you are using the same CPU configuration, the D300 can never beat the D500 and the D500 can never beat the D700. The best they can do is tie.

However, you are correct that in some benchmarks it may seem the D300 is beating the D500; what that really shows is that sometimes the quad core will beat the 6, 8 or 12 core. They just happened to use different GPUs in each one.

If the 12 core had the D300 and the 4 core had the D500, the quad core still would have beaten it, but not because of the GPU.

The only way to know for sure is for them to test each of the 3 GPU options with all other options being the same (RAM, Cores, SSD).

So, if they tested a 4 core D300 next to a 4 core D500 and a 4 core D700, we would know which card was faster for each task.

In applications that are fully GPU-enabled, the results should improve from D300 to D500 to D700, in that order (at best).

In non-GPU enhanced programs, all three would tie.

Given the same specifications for the rest of the system, I don't think there would be any other outcome.
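
Put numbers on it and the same conclusion falls out: with the CPU held constant, the only lever left is the card, so the best-case gain is just an Amdahl-style ratio on the GPU-bound fraction of the job. The fractions below are hypothetical.

Code:
# With the CPU fixed, only the GPU-bound fraction f of a job changes speed.
# Best-case estimate of the D500/D700 over the D300, using the teraflops
# figures quoted earlier in the thread.
def speedup_vs_d300(f, tflops, d300=2.0):
    return 1.0 / ((1.0 - f) + f * (d300 / tflops))

for f in (0.0, 0.25, 0.5, 1.0):
    print(f"GPU-bound fraction {f:.2f}: "
          f"D500 {speedup_vs_d300(f, 2.2):.2f}x, "
          f"D700 {speedup_vs_d300(f, 3.5):.2f}x")

At f = 0 everything ties, exactly as above; at f = 1 you get the raw 1.10x / 1.75x ceilings.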
 