Are you suggesting that companies like AMD keep chips in massive warehouses hoping to sell them one day?

You have a really good point.

They more likely manufacture them in quantities to fill orders from OEMs.

It's such a rapidly changing market.
They have probably refined the production process to keep the number of bad chips as low as possible, and to manufacture no more than they have actually sold.

CPUs are a different story, because those are sold not only to OEMs but also as standalone units to end customers. So quantities there are based on OEM orders as well as on sales forecasts.
 
Are you suggesting that companies like AMD keep chips in massive warehouses hoping to sell them one day?

Nope. The deal was certainly made before the XTL was released to the market, but AMD knows its roadmap. I think the XT2 was made to fill the gap between the XT and the XTL and to gain some short-term advantage in the "GPU battle" with Nvidia.
 
You have a really good point.

They more likely manufacture them in quantities to fill orders from OEMs.

It's such a rapidly changing market.
They have probably refined the production process to keep the number of bad chips as low as possible, and to manufacture no more than they have actually sold.

Of course they are. I consult for one of the big electronics firms and have been doing so for the last three and a half years. Storing stuff in warehouses is practically a swear word at these companies. Forecasting demand and just-in-time manufacturing are the keys to profitability.
 
Of course they are. I consult for one of the big electronics firms and have been doing so for the last three and a half years. Storing stuff in warehouses is practically a swear word at these companies. Forecasting demand and just-in-time manufacturing are the keys to profitability.

Just in time indeed; let's hope the Mac Pros estimated for February actually arrive in February :rolleyes:

And yes, nothing is wasted in electronics. At most a part will end up in a lower-tier (cheaper) product, and it may have a higher failure rate. Not all of them, but just pointing out the possibilities.
 
You've got it all wrong

Makes sense... What's your take on the D500? It doesn't seem to offer any added performance over the D300 in most benchmarks that have been run. Why would Apple even bother offering both the D500 and the D300 given their near-identical performance in most benchmarks?

The SPECviewperf tests tell only half the story, namely the OpenGL side (or, if you like, the display potency) of the cards. But here is the bigger part: the COMPUTING PERFORMANCE. If you look at the numbers, Apple sacrificed a bit of OpenGL performance (memory amount and clock speed) BUT KEPT THE COMPUTATIONAL POWER OF THE CARDS. The D500 is actually a close match to the W8000 in compute power, the D700 a close match to the W9000, and the D300 closer to the W7000. This is of great importance, since the W7000 is close in performance to the Nvidia K5000, a very expensive card. If software uses OpenCL more and more (let's hope), this counts for much more than the pure OpenGL capabilities of the cards (which are better bang for your buck on AMD anyway). So yes, they are pro cards (for the visual part of the story), but they also have great OpenCL capabilities. If you try to run fluid simulations or Pyro FX in Houdini (for example), which can take advantage of multiple cards, then you will see the true advantage. The question is, how many software packages take advantage of this NOW? FCPX is one, maybe Houdini for its fluids and Pyro FX (even more so if SideFX updates their macOS distro), but I don't know about others (Mari, from what I've heard?). Anyway, a pair of W9000 FirePros will cost you an arm and a leg, so if you DO have an app that takes advantage of a dual setup, then the dual D700 (or, let's face it, even the D500/W8000) is a great deal. Let's wait and see where and when the software producers move on this, but I think I will budget one of these babies for the next iteration. :D
 
The SPECviewperf tests tell only half the story, namely the OpenGL side (or, if you like, the display potency) of the cards. But here is the bigger part: the COMPUTING PERFORMANCE. If you look at the numbers, Apple sacrificed a bit of OpenGL performance (memory amount and clock speed) BUT KEPT THE COMPUTATIONAL POWER OF THE CARDS. The D500 is actually a close match to the W8000 in compute power, the D700 a close match to the W9000, and the D300 closer to the W7000. This is of great importance, since the W7000 is close in performance to the Nvidia K5000, a very expensive card. If software uses OpenCL more and more (let's hope), this counts for much more than the pure OpenGL capabilities of the cards (which are better bang for your buck on AMD anyway). So yes, they are pro cards (for the visual part of the story), but they also have great OpenCL capabilities. If you try to run fluid simulations or Pyro FX in Houdini (for example), which can take advantage of multiple cards, then you will see the true advantage. The question is, how many software packages take advantage of this NOW? FCPX is one, maybe Houdini for its fluids and Pyro FX (even more so if SideFX updates their macOS distro), but I don't know about others (Mari, from what I've heard?). Anyway, a pair of W9000 FirePros will cost you an arm and a leg, so if you DO have an app that takes advantage of a dual setup, then the dual D700 (or, let's face it, even the D500/W8000) is a great deal. Let's wait and see where and when the software producers move on this, but I think I will budget one of these babies for the next iteration. :D

Very good point indeed, but please paragraph it next time; it'll be easier on the eyes :)

And since you're mentioning OpenCL, AMD runs circles around Nvidia when it comes to OpenCL support.

I'm naively hoping that more 3D rendering software will support OpenCL in the future, as it's compatible with both AMD and Nvidia cards (and it'll probably force Nvidia to get their act together when it comes to OpenCL performance).
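Since OpenCL is vendor-neutral, you can actually check what a given box exposes. A minimal sketch, assuming the third-party pyopencl package is installed (that package is my assumption, not something anyone in the thread mentioned), that just lists the available platforms and devices:

```python
# Minimal sketch: list the OpenCL platforms and devices the drivers expose.
# Assumes the third-party pyopencl package is installed.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.vendor}), {platform.version}")
    for device in platform.get_devices():
        print(f"  Device        : {device.name}")
        print(f"  Compute units : {device.max_compute_units}")
        print(f"  Global memory : {device.global_mem_size // (1024 ** 2)} MB")
```

The same enumeration works whether the card is AMD or Nvidia, which is exactly the portability argument above.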
 
Compute performance

OpenCL compute performance seems to be Apple's main target, but they have not included ECC support.

This is probably OK for rendering, but for scientific/financial calculations and simulations ECC is important.

Also, if you're just looking at OpenCL performance, the much cheaper consumer cards are equally good at it, apart perhaps from the cooling/continuous-running point of view. ECC support is one thing that differentiates the FirePro cards here; the other is memory capacity, but the 7950/7970 already have 3 GB.

Noise and size apart, for compute tasks where you don't care about ECC, a standard workstation with 7970s would be much cheaper than an nMP with D700s.
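If ECC is the deciding factor, OpenCL can at least report whether a device advertises error-corrected memory. Another small sketch under the same pyopencl assumption (the flag only reflects what the driver exposes; it's not a guarantee of end-to-end ECC):

```python
# Sketch: check the CL_DEVICE_ERROR_CORRECTION_SUPPORT flag on each device.
# Assumes pyopencl; consumer cards will typically report False here.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        ecc = device.get_info(cl.device_info.ERROR_CORRECTION_SUPPORT)
        print(f"{device.name}: ECC {'supported' if ecc else 'not supported'}")
```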
 
When it comes to 3D rendering, OpenCL is not an ideal choice, as it has far higher overhead than OpenGL.

OpenGL is focused on generating data and exporting it.
OpenCL is focused on comparing data and manipulating it.

That is not to say that there isn't some significant overlap between the two though.

OpenCL can do some pretty cool stuff when it comes to multimedia cropping, scaling, rotation, etc., as well as comparing samples to samples that reside within surrounding frames, things that simply have not been implemented within OpenGL.

Really, it is more the case of "choose the appropriate tool for the job".
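As a rough illustration of the "manipulating data" side, here is a toy OpenCL kernel that scales a buffer of pixel samples on the GPU, the kind of per-sample arithmetic OpenCL is built for. It's a sketch only, again assuming pyopencl and numpy are installed; nothing here comes from a real imaging pipeline:

```python
# Sketch: scale a buffer of pixel samples on the GPU with a trivial OpenCL kernel.
# Assumes pyopencl and numpy are installed.
import numpy as np
import pyopencl as cl

pixels = np.random.rand(1_000_000).astype(np.float32)  # stand-in for image data
gain = np.float32(1.5)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

program = cl.Program(ctx, """
__kernel void scale(__global const float *src, __global float *dst, const float gain)
{
    int i = get_global_id(0);
    dst[i] = src[i] * gain;
}
""").build()

mf = cl.mem_flags
src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=pixels)
dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, pixels.nbytes)

program.scale(queue, pixels.shape, None, src_buf, dst_buf, gain)

result = np.empty_like(pixels)
cl.enqueue_copy(queue, result, dst_buf)
assert np.allclose(result, pixels * gain)
```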
 
Yeah, if my memory serves me right, this was exactly the case with the Intel Core 2 Duo and Intel Core Duo. The Core Duo is basically the rejected (OK, that sounds a little harsh, let's call it "didn't meet the required spec" :p) version of the Core 2 Duo.

I think that you have the right idea, but are mixing up names.

Mobile "Core Duo" was the 32-bit Yonah 65nm architecture with two cores, "Core Solo" was the Yonah with a single core. Since they had the same size die with the same number of transistors, a Solo was almost certainly a Duo with one defective core.

Mobile "Core 2 Duo" was the 64-bit Merom 65nm architecture with a larger die and almost twice as many transistors.
 
You are looking at "flipped off" from the wrong angle.

Yield defects in the 25% range? If so, TSMC is making a mint (since they charge per wafer processed). Sure, there are defective dies that get binned, but you also can't count on binned defects to fill a quota. Some good dies are also configured down to fill those bins.


Apple is probably getting a great price on GPUs that ATI would have thrown in the trash for being too slow or having too few working cores.

A short-term gimmick, but long term that will pose problems. As with the chipset with 10 SATA lanes they don't need, there are definitely "make do with what they can get" components in this first model that will largely need far better-aligned replacements in future versions.
 
But here is the bigger part: the COMPUTING PERFORMANCE. If you look at the numbers, Apple sacrificed a bit of OpenGL performance (memory amount and clock speed) BUT KEPT THE COMPUTATIONAL POWER OF THE CARDS. The D500 is actually a close match to the W8000 in compute power, the D700 a close match to the W9000, and the D300 closer to the W7000.

Not sure what numbers you are looking at.

W8000: 3.2 TFLOPS
http://en.wikipedia.org/wiki/Comparison_of_AMD_graphics_processing_units#FirePro_Workstation_Series

D500: 2.2 TFLOPS
http://www.apple.com/mac-pro/specs/

That isn't even close, even if we're playing "horseshoes and hand grenades".

The D300 is hand-grenade close (2.0 TFLOPS versus 2.4 TFLOPS, about 17% less).
The D700 is hand-grenade close (3.5 TFLOPS versus 3.9 TFLOPS, about 10% less).

Although those Apple numbers are a bit suspect: they look like the performance of a single GPU rolling downhill with a hurricane tailwind. The AMD ones are a bit optimistic too, but good luck hitting the Apple figures when the GPUs and CPU are this tightly coupled.
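For what it's worth, the gaps fall straight out of those published peak numbers; a quick sketch using only the figures quoted above (nothing measured here):

```python
# Sketch: percent gap between the peak TFLOPS figures quoted above.
pairs = {
    "D300 vs W7000": (2.0, 2.4),
    "D500 vs W8000": (2.2, 3.2),
    "D700 vs W9000": (3.5, 3.9),
}

for label, (d_card, w_card) in pairs.items():
    gap = (d_card - w_card) / w_card * 100
    print(f"{label}: {gap:+.1f}%")

# Roughly: D300 ~17% and D700 ~10% behind their FirePro counterparts,
# while the D500 sits ~31% behind the W8000 -- the outlier in the claim.
```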



... but they also have great OpenCL capabilities. If you try to run fluid simulations or Pyro FX in Houdini (for example), which can take advantage of multiple cards, then you will see the true advantage.

Fluids where accuracy doesn't matter so much, that is. For something that merely visually resembles a fluid, yeah, they are fine.
 
I think that you have the right idea, but are mixing up names.

Mobile "Core Duo" was the 32-bit Yonah 65nm architecture with two cores, "Core Solo" was the Yonah with a single core. Since they had the same size die with the same number of transistors, a Solo was almost certainly a Duo with one defective core.

Mobile "Core 2 Duo" was the 64-bit Merom 65nm architecture with a larger die and almost twice as many transistors.

Yeah, I got it mixed up, thanks for the correction! ;)
 
Yield defects in the 25% range? If so, TSMC is making a mint (since they charge per wafer processed). Sure, there are defective dies that get binned, but you also can't count on binned defects to fill a quota. Some good dies are also configured down to fill those bins.

75% good would be fantastic!

http://en.wikipedia.org/wiki/Semiconductor_device_fabrication

Device test

Main article: wafer testing

Once the front-end process has been completed, the semiconductor devices are subjected to a variety of electrical tests to determine if they function properly. The proportion of devices on the wafer found to perform properly is referred to as the yield. Manufacturers are typically secretive about their yields, but it can be as low as 30%.
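For a rough feel of how yield relates to defect density and die size, the classic back-of-the-envelope Poisson model is Y = e^(-D·A). The numbers below are purely illustrative assumptions, not TSMC data:

```python
# Sketch: Poisson yield model, yield = exp(-defect_density * die_area).
# Defect densities and die area below are illustrative guesses, not real fab numbers.
import math

def poisson_yield(defects_per_cm2: float, die_area_cm2: float) -> float:
    return math.exp(-defects_per_cm2 * die_area_cm2)

die_area = 3.5  # cm^2, i.e. ~350 mm^2, ballpark for a big 28 nm GPU die (assumed)
for d in (0.1, 0.5, 1.0):  # defects per cm^2 (assumed)
    print(f"D = {d:.1f}/cm^2 -> yield ~ {poisson_yield(d, die_area):.0%}")

# Small dies shrug off a given defect density; big GPU dies get hit hard,
# which is why salvaging partly defective dies into lower bins makes sense.
```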
 
Hmm, does anyone have any idea which GPGPU 3D rendering software makes use of OpenCL? So far I've only found V-Ray.

I really wish KeyShot would include OpenCL in a future update.
 
Very good point indeed, but please paragraph it next time; it'll be easier on the eyes :)
And since you're mentioning OpenCL, AMD runs circles around Nvidia when it comes to OpenCL support.
I'm naively hoping that more 3D rendering software will support OpenCL in the future, as it's compatible with both AMD and Nvidia cards (and it'll probably force Nvidia to get their act together when it comes to OpenCL performance).

I did quote it, not sure why it didn't appear. Yeah, this is why I mentioned Houdini. They only offer OpenCL support for Pyro FX and fluid simulations. Let's hope more will follow.
 
" Fluids were accuracy matters not so much. Something that visually resembles a fluid then yeah they are fine"

I am not sure you understood. In Houdini FX (formerly Houdini Master), some of the computationally very intensive tasks, such as fluid simulations and pyro effects (smoke, fire, dust, debris, etc.), are done on the GPU side using OpenCL. This is where the W9000 comes in very handy, with fantastic capabilities in both OpenGL and OpenCL. Good luck purchasing two of those, though: one card is about $4.5k in Europe and $3k in the US. A pair of them costs more than the whole machine with the dual D700 CTO, so it's a great deal to say the least. When you look at it this way, you wish more software could leverage these babies, and I think Apple hopes so too. So far we can count Houdini, FCPX, and Mari as taking advantage of the dual cards for both visuals and computation. If you guys know of more software that can use them, just list it. I know I will purchase a Mac Pro, not sure if this iteration or version 2, but I will definitely take one with the dual W9000-class cards.
 
I think it is rather optimistic to compare the D700s with the W9000.

They seem to me to be closer to something like the S9000, but without ECC memory (the S9000 is a W8000 with more memory, I think).

It is the OpenGL performance of the D700s that is critical. If they really are close to the W9000, then it seems AMD will have underpriced themselves and will have to reduce the cost of the W9000 substantially.
 
I think it is rather optimistic to compare the D700s with the W9000.

They seem to me to be closer to something like the S9000, but without ECC memory (the S9000 is a W8000 with more memory, I think).

It is the OpenGL performance of the D700s that is critical. If they really are close to the W9000, then it seems AMD will have underpriced themselves and will have to reduce the cost of the W9000 substantially.

We shouldn't really compare those cards as if one D700 equals one W9000.
We get two FirePro cards in the system by default.

The real comparison will be system to system, not card to card.
But for that we need to wait until the software actually catches up.

It is more likely the nMP's workload was split across two cards for better heat management than to actually increase the performance of the system. And they probably never intended the 1x D700 = 1x W9000 comparison.
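Put in numbers, the system-to-system view under the (very generous) assumption of perfect two-card scaling, using only the peak figures quoted earlier in the thread:

```python
# Sketch: naive system-level peak compute, assuming perfect 2-card scaling
# (which real applications rarely achieve; many use only one GPU at a time).
d700_peak_tflops = 3.5
w9000_peak_tflops = 3.9

print(f"nMP, 2x D700 : {2 * d700_peak_tflops:.1f} TFLOPS peak")
print(f"1x W9000     : {w9000_peak_tflops:.1f} TFLOPS peak")
```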
 
I've been watching this thread with interest, as I have been considering moving from my Hackintosh setup to the new Mac Pro. I tried the SPECviewperf 12 benchmark to see if there was any advantage to having two GPUs versus one. I ran the benchmark in CrossFire mode and with one of the cards physically removed. It appears that SPECviewperf does not take advantage of the second card (at least with the gaming drivers used with the HD 7970). Maybe the SPECviewperf benchmark is not a fair measure of the Mac Pro's capability if it can't use multiple GPUs.
 

Attachments: Crossfire.jpg, Single.jpg
Maybe the SPECviewperf benchmark is not a fair measure of the Mac Pro's capability if it can't use multiple GPUs.

It's not, but that's really more up to the software developers than anything else. We're not really as interested in the raw horsepower of the Mac Pro as we are in how it performs in the specific applications we use.
 
I've been watching this thread with interest, as I have been considering moving from my Hackintosh setup to the new Mac Pro. I tried the SPECviewperf 12 benchmark to see if there was any advantage to having two GPUs versus one. I ran the benchmark in CrossFire mode and with one of the cards physically removed. It appears that SPECviewperf does not take advantage of the second card (at least with the gaming drivers used with the HD 7970). Maybe the SPECviewperf benchmark is not a fair measure of the Mac Pro's capability if it can't use multiple GPUs.

Hmm, that's an interesting result you got. I'm surprised your score is higher than the D500's. Anyone care to explain why? Because in SPECviewperf 11, the difference between gaming and workstation GPUs is very pronounced.

[Attached charts: 03-OpenGL-SPECViewperf11-05-ProE-05.png, 03-OpenGL-SPECViewperf11-06-Solidworks-03.png]
 
I've been watching this thread with interest, as I have been considering moving from my Hackintosh setup to the new Mac Pro. I tried the SPECviewperf 12 benchmark to see if there was any advantage to having two GPUs versus one. I ran the benchmark in CrossFire mode and with one of the cards physically removed. It appears that SPECviewperf does not take advantage of the second card (at least with the gaming drivers used with the HD 7970). Maybe the SPECviewperf benchmark is not a fair measure of the Mac Pro's capability if it can't use multiple GPUs.

A lot of those apps are not multi-GPU aware. The reason a lot of us were looking for benchmarks was to see if these cards had the incredible performance boost provided by the others in the FirePro line.

As for what they're capable of when running other apps, clearly we'd have to see the individual benchmarks.

With all these inconsistent performance results, it isn't clear which workstation GPU is better or worse in general terms; it really does depend on the task.

The 7970, for instance, should beat the nMP in FCPX, as it's the same GPU with a higher clock, but it does not.
 