This is old news (I hate when people say that :) ).

D700 is a significantly underclocked W9000.
 
The D series seem closest to the AMD S series (Server rather than Workstation).

The D700 is very close in spec to the S9000:

http://www.amd.com/uk/products/workstation/graphics/firepro-remote-graphics/S9000/Pages/S9000.aspx

The S series, like the D series, are designed for GPGPU computing rather than having the drivers for professional 3D applications that the W series have (which is the main reason for their high price tags).

Of course it may turn out that the D series do work with the pro drivers under Windows - we'll need to wait for the hands-on reviews.
 
Of course it may turn out that the D series do work with the pro drivers under Windows - we'll need to wait for the hands-on reviews.

I desperately hope it will, as I'm hoping to use SolidEdge with this machine. I also do video and photo work, where I use FCPX and Aperture, so the ability to use all of that software on one machine is very tempting.
 
Significantly? 975 vs 850 isn't that significant.

For graphics cards, this is pretty significant in terms of performance. Generally speaking, they scale better than CPUs do since clock speeds are lower. That said, I think they're perfectly adequate and it's a very fair compromise to fit it in the thermal profile of the Mac Pro. I'd guess that's the reason for downclocking.
 
For graphics cards, this is pretty significant in terms of performance. Generally speaking, they scale better than CPUs do since clock speeds are lower. That said, I think they're perfectly adequate and it's a very fair compromise to fit it in the thermal profile of the Mac Pro. I'd guess that's the reason for downclocking.

See the following ...

I'm more than happy to give up 12% in clock speed

According to the article above it's approximately 12% slower in terms of tflops performance, 3.5 tflops versus 4. HOWEVER, you get TWO of them on the nMP, whereas if you bought them as regular AMD cards they'd cost you a small fortune. So the net effect is that for most people (who aren't buying two AMD workstation cards at 3.5k each) it's a massive win. 7 tflops versus 4, or a 43% improvement.
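
If anyone wants to sanity-check those numbers, here's a quick back-of-the-envelope script. It assumes both cards are the same Tahiti chip with 2048 stream processors and one fused multiply-add (2 FLOPs) per stream processor per cycle, which is what the published specs suggest:

```python
# Back-of-the-envelope check of the quoted tflops numbers.
STREAM_PROCESSORS = 2048        # assumed: same Tahiti chip in both cards
FLOPS_PER_SP_PER_CYCLE = 2      # one fused multiply-add counts as 2 FLOPs

def fp32_tflops(clock_mhz):
    """Theoretical single-precision throughput from the clock speed."""
    return STREAM_PROCESSORS * FLOPS_PER_SP_PER_CYCLE * clock_mhz * 1e6 / 1e12

w9000 = fp32_tflops(975)  # ~3.99 tflops
d700 = fp32_tflops(850)   # ~3.48 tflops at the boost clock

print(f"W9000: {w9000:.2f} tflops")
print(f"D700:  {d700:.2f} tflops")
print(f"Single-card deficit: {(1 - d700 / w9000) * 100:.1f}%")  # ~12.8%
print(f"Two D700s vs one W9000: {2 * d700 / w9000:.2f}x")       # ~1.74x
```

Throughput scales linearly with clock speed when the chip is otherwise identical, which is why the ~12% clock drop maps directly onto the 4 vs 3.5 tflops figures.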
 

That article explains nothing and brings no new info at all. They put the specs from the Apple site, which have been available for a long time, into a table. Great journalism.

----------

Basically a choice between performance per watt (D700) and max performance (W9000).

Indeed. For most of us it is no surprise that Apple have not yet found ways to beat basic laws of thermodynamics. A compromise was always on the table.
 
Indeed. For most of us it is no surprise that Apple have not yet found ways to beat basic laws of thermodynamics. A compromise was always on the table.

Yeah, fair enough. However, we don't know the thermal capabilities of that heatsink and case; it's imaginable that they wouldn't have to throttle back at all. There are a lot of factors: power budget, heat budget, noise budget, materials, size constraints, chip yields, marketing, etc. It's conceivable that they simply underclocked the chips by a bit for marketing, general stability, and chip binning.

As I say however they're handing us 7 tflops for a good price, and that's obviously what they were gunning for.
 
For people wondering about throttling due to power/heat limitations, keep in mind that it's going to be pretty damn hard to find a real-world workload that actually pegs the CPU and both GPUs at max on a continuous basis. Such a workload would need to have the following properties:

  • It scales perfectly to any number of CPU cores, without being limited by serial task components or memory bandwidth.
  • It scales perfectly to any number of GPU cores, without being limited by available VRAM, VRAM bandwidth, or PCIe bandwidth, and without caring that the GPU cores are divided between cards.
  • It's perfectly 'balanced' between CPU and GPU given the particular combination of CPU and GPU in that machine: whatever CPU component it has runs fast enough to not slow the GPU component, and vice versa.

It will be trivial to construct a synthetic benchmark with these properties. Since a synthetic benchmark isn't doing any real work, there are no interdependent task components to worry about. But actually pressing this system hard enough to force any sort of throttling in the real world will probably almost always require running more than one entirely independent task, something like running one app with a dual-GPU aware GPU-based rendering engine going full tilt while simultaneously encoding unrelated content (not, say, the output of the aforementioned rendering engine) to H.264 in another app.
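
To illustrate how trivial such a synthetic load would be to construct, here's a rough sketch, assuming PyOpenCL is installed and both FirePros show up as OpenCL GPU devices 0 and 1 (hypothetical, and obviously untested on an actual Mac Pro):

```python
import multiprocessing as mp
import pyopencl as cl

KERNEL_SRC = """
__kernel void burn(__global float *buf) {
    float x = buf[get_global_id(0)];
    // Long dependent FMA chain: pure ALU work with no memory or PCIe
    // traffic, so it scales perfectly across all of a device's cores.
    for (int i = 0; i < 1000000; i++)
        x = x * 1.0000001f + 0.0000001f;
    buf[get_global_id(0)] = x;
}
"""

def gpu_burn(device_index):
    # Each process drives one GPU with a completely independent workload.
    gpus = cl.get_platforms()[0].get_devices(device_type=cl.device_type.GPU)
    ctx = cl.Context([gpus[device_index]])
    queue = cl.CommandQueue(ctx)
    prog = cl.Program(ctx, KERNEL_SRC).build()
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=4 * (1 << 20))
    while True:
        prog.burn(queue, (1 << 20,), None, buf)
        queue.finish()

def cpu_burn():
    # Same idea on a CPU core: dependent FLOPs, no shared state.
    x = 1.0
    while True:
        x = x * 1.0000001 + 0.0000001

if __name__ == "__main__":
    workers = [mp.Process(target=gpu_burn, args=(i,), daemon=True) for i in range(2)]
    workers += [mp.Process(target=cpu_burn, daemon=True) for _ in range(mp.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()  # runs until interrupted
```

Because every worker is completely independent, none of the real-world bottlenecks in the list above apply, which is exactly why numbers from a load like this won't tell you much about real workloads.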
 
Significantly? 975 vs 850 isn't that significant.

850 is the boost clock, 650 is base clock.


edit: a 13% deficit even comparing the D700's boost clock to the W9000's clock--that's significant. Under workstation loads, we'll see how well it keeps the speed up. The massive underclock on the base clock was clearly about heat. Looks like the thermal core isn't as amazing as the hype would indicate.
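
For reference, the percentages against the W9000's 975MHz work out like this:

```python
# Quick percentage math on the published clocks.
w9000_clock = 975.0  # MHz
d700_boost = 850.0   # MHz
d700_base = 650.0    # MHz

print(f"boost vs W9000: {(1 - d700_boost / w9000_clock) * 100:.0f}% lower")  # ~13%
print(f"base vs W9000:  {(1 - d700_base / w9000_clock) * 100:.0f}% lower")   # ~33%
```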
 
According to the article above it's approximately 12% slower in terms of tflops performance, 3.5 tflops versus 4. HOWEVER, you get TWO of them on the nMP, whereas if you bought them as regular AMD cards they'd cost you a small fortune. So the net effect is that for most people (who aren't buying two AMD workstation cards at 3.5k each) it's a massive win. 7 tflops versus 4, or a 43% improvement.

So... we're in agreement that they scale well? That means upwards with clock increases *and* downwards as well.

I didn't say I disapproved; it's a good tradeoff, not even for price necessarily, but for the fact that the lower TDP makes this form factor possible at all. I doubt the pricing is anything from AMD specifically so much as it is Apple doing the custom boards so they can price more generously. In many ways, this is less of a "good deal" (a 7970 still costs less, even in today's inflated market) than an indictment of how much AMD/NVIDIA overprice their workstation cards. Still, it's great to see Apple make GPU performance a priority at very, very reasonable prices. It's obvious what Apple thinks is the future going forward.

Also, the ability to use two GPUs depends on usage. In some cases, it will scale nearly perfectly, but many are inefficient and sometimes, dual GPU utilization is nonexistent altogether.
 
See the following ...



According to the article above it's approximately 12% slower in terms of tflops performance, 3.5 tflops versus 4. HOWEVER, you get TWO of them on the nMP, whereas if you bought them as regular AMD cards they'd cost you a small fortune. So the net effect is that for most people (who aren't buying two AMD workstation cards at 3.5k each) it's a massive win. 7 tflops versus 4, or a 43% improvement.

Technically, a 75% improvement :)

4 * 1.75 = 7

But your point is well-taken. It's a net improvement either way. In fact, you get an extra 3 teraflops of performance (if the software can use it) for less money overall.
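
The 43% vs 75% discrepancy is just the usual percentage-change asymmetry; the same two numbers give a different percentage depending on which one you use as the baseline:

```python
# Same two numbers, opposite baselines.
one_w9000 = 4.0  # tflops, a single W9000
two_d700s = 7.0  # tflops, the dual D700s

increase = (two_d700s - one_w9000) / one_w9000 * 100  # baseline 4: +75%
decrease = (two_d700s - one_w9000) / two_d700s * 100  # baseline 7: ~43%

print(f"4 -> 7 tflops: +{increase:.0f}%")            # the 75% improvement
print(f"relative to 7 tflops: {decrease:.0f}%")      # the earlier 43% figure
```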
 
850 is the boost clock, 650 is base clock.

I didn't take the base clock much into consideration, just as I don't care about the base clocks of CPUs, since they mostly run on Turbo Boost unless you utilise all of the cores for a while. It'll be similar with the Mac Pro: since most apps use a single GPU, the other will be idle, and the one you are using will be in boost mode.
 
I didn't take the base clock much into consideration, just as I don't care about the base clocks of CPUs, since they mostly run on Turbo Boost unless you utilise all of the cores for a while. It'll be similar with the Mac Pro: since most apps use a single GPU, the other will be idle, and the one you are using will be in boost mode.

Doubt that will happen with this power restricted Mac Pro.
 
I didn't take the base clock much into consideration, just as I don't care about the base clocks of CPUs, since they mostly run on Turbo Boost unless you utilise all of the cores for a while. It'll be similar with the Mac Pro: since most apps use a single GPU, the other will be idle, and the one you are using will be in boost mode.

With single GPU tasks, the power/heat should be manageable--it'll likely hit close to the boost clock with only a 12% drop in Hz. Luxmark dual GPU benchmarks nearly max out the PSU though. I'm not sure at that point it's going to be doing that well after a few minutes of use. Heaven forbid you use the CPU at the same time.

I guess for most workflows, it won't be too bad of a performance hit.
 
With single GPU tasks, the power/heat should be manageable--it'll likely hit close to the boost clock with only a 12% drop in Hz. Luxmark dual GPU benchmarks nearly max out the PSU though. I'm not sure at that point it's going to be doing that well after a few minutes of use. Heaven forbid you use the CPU at the same time.

I guess for most workflows, it won't be too bad of a performance hit.

So far it seems that even in boost mode with the CPU running flat out, these things will be well below the PSU limit. The CPU is around 130W, the dual FirePros around 108W each, for a total of 346W. Yes, there are other things requiring power as well, but not as much as 100W more. I think these things will be running mostly in boost mode, even when used together.

But of course we'll have to wait and see for the benchmarks.
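
A quick sanity check on that budget (assuming the quoted TDP figures and the 450W PSU rating that comes up below):

```python
# Power-budget estimate from the post above, in numbers.
cpu_tdp = 130       # W, rough figure for the Xeon
gpu_tdp = 108       # W each, the quoted FirePro figure
psu_capacity = 450  # W, the Mac Pro's rated PSU

total = cpu_tdp + 2 * gpu_tdp    # 346 W
headroom = psu_capacity - total  # 104 W for everything else

print(f"CPU + 2 GPUs: {total} W, leaving {headroom} W of headroom")
```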
 
So far it seems that even in boost mode with the CPU running flat out, these things will be well below the PSU limit. The CPU is around 130W, the dual FirePros around 108W each, for a total of 346W. Yes, there are other things requiring power as well, but not as much as 100W more. I think these things will be running mostly in boost mode, even when used together.

But of course we'll have to wait and see for the benchmarks.

The German review ran Luxmark (OpenCL) and it went up to 438W. I'm guessing the ~100W TDP is at the base clock.

Power measurements showed a maximum of 438 watts under Luxmark; with normal applications running in parallel we only reached 230 watts.
 
That's good news. That means that even during Luxmark, they ran in boost mode unless there was something else drawing power.

That fits exactly with the projections in the previous thread about power consumption. With GPU only, it barely fits in there. If the CPU were significantly active at the same time, the performance hit would be immediate and severe.

Also, since it's using 438W of a 450W maximum, the addition of bus-powered TB and USB devices (up to 75W total) could impact performance.

In addition, I'd like some real-world benchmarks (running the test for several hours) to see if it still holds up once the machine is warm. It's unclear whether these benchmarks are from a cold start; if so, the sustained numbers could be quite different.

It's possible it can maintain low temperatures and it's possible there aren't a lot of Dual GPU+CPU use-cases. As a practical matter it simply may not be an issue.
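
To put numbers on the headroom concern (using the review's 438W peak, the 450W PSU rating, and the 75W worst-case bus power from above):

```python
# How much PSU capacity is left at the measured Luxmark peak.
psu_capacity = 450  # W
luxmark_peak = 438  # W, from the German review
bus_power_max = 75  # W, worst case for bus-powered TB + USB devices

headroom = psu_capacity - luxmark_peak  # 12 W
shortfall = bus_power_max - headroom    # 63 W over budget in the worst case

print(f"Headroom at peak: {headroom} W")
print(f"Worst-case overcommit with full bus power: {shortfall} W")
```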
 