Correct; the question is where that threshold lies and whether it will be hit.

With 450W and a 130W CPU, that leaves 320W. Say 20W for support, and that leaves 150W for each GPU. Most 7970s are specced for 250W, so that leaves us with a seeming 100W deficit. However, regular cards have overclocking headroom which Apple doesn't need to support. I bet that those GPUs, at that clock, really only need 150W.
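For what it's worth, here's that budget as a quick sketch; every figure is this thread's estimate, not a published Apple spec:

```python
# Rough nMP power-budget arithmetic (all numbers are this thread's guesses).
PSU_W     = 450  # power supply rating
CPU_W     = 130  # Xeon TDP
SUPPORT_W = 20   # assumed: SSD, fans, housekeeping
N_GPUS    = 2

per_gpu_w = (PSU_W - CPU_W - SUPPORT_W) / N_GPUS
print(per_gpu_w)        # 150.0 W available per GPU
print(250 - per_gpu_w)  # 100.0 W apparent deficit vs. a 250 W 7970
```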

Can't imagine many workloads pegging all three components at the same time, maybe they're banking on that? :)
Regular cards don't "allow for" overclocking headroom as such; they're just chips that can take more voltage and run stably at a higher clock, assuming adequate cooling (it's largely voltage that determines power consumption).
I haven't looked closely enough; the nMP just seems like an unnecessary compromise in so many areas to me. But there's nothing stopping Apple and their QA from selecting well-binned components that run relatively high clocks at a relatively low voltage.
Maybe it can handle everything at 100%, but it's still underclocked compared to a cheap reference card, dictated by the form factor. Still, like I said, 450W seems especially small allowing for peripherals, tolerance, capacitor wear etc. :)


I missed that the topic was specifically about the nMP's AMD cards; I thought it was just about the nMP.
...
However, the present and future cards from AMD will not require a physical Crossfire bridge between the cards. See here.

Looks like you missed the post that you quoted, too? :) He specifically mentioned all this.
People are saying that Crossfire is supported, so, bearing in mind that the Dx00 series is based on AMD's older architecture, the physical bridge is present in the nMP.
 
....
I agree, and I also think these will be the only cards for this iteration of the nMP, but I don't see a long-term trade-off. Current-generation GPUs use the bus directly, so keeping exactly the same connector is only going to leave some unused pins?

Conceptually they can use a subset, but if they made an exception for Crossfire, they could also swing for SLI on the next design iteration (presuming Nvidia did an about-face on slacking on following new OpenCL standards). If the connector keeps flip-flopping around, then the long-term outcome is that all the GPU cards will be confined to the same iteration of Mac Pro they came out with.

If the connector is a de facto standard (i.e., isn't varying all the time), then conceptually you can engage Apple on the topic of card upgrade possibilities. If it's in flux all the time, that conversation isn't even going to get started.




....
But you are correct that the proprietary AMD cards themselves could be missing Crossfire support, e.g., in terms of firmware.

Presuming the Windows FirePro drivers don't display options in the FirePro control panel for connections that aren't present, the connections are there on these D300/D500/D700 cards. The video posted earlier has toggles to turn Crossfire on/off. If the bridges weren't connected, the driver probably shouldn't present the toggle at all, or at the very least it should be entirely greyed out.

However, the present and future cards from AMD will not require a physical Crossfire bridge between the cards. See here.

It actually isn't the "present" cards. The majority of AMD's present line-up predates the card you are pointing at. More accurately, it is probably most of the new AMD cards going forward from the present.

There is little so far to indicate these Apple cards are anything other than tweaks on the older designs. They are highly indicative of existing GPU packages put onto a new physical card format.
 
Why would you compare only Quadros? That's just a brand name.

Not if you're a professional using pro software under Windows. Having issues in Maya with a non-Quadro or FirePro? You're not supported as far as Autodesk is concerned.
Under Windows, only the pro cards are supported on pro applications, and even then, only when they're using the pro drivers.
 
Not if you're a professional using pro software under Windows. Having issues in Maya with a non-Quadro or FirePro? You're not supported as far as Autodesk is concerned.
Under Windows, only the pro cards are supported on pro applications, and even then, only when they're using the pro drivers.

Taken slightly out of context considering the discussion, right? :)

I was about to thank you for stating the obvious, but seeing as I don't run any of these applications, I thought I'd quickly google your example to see if my flaky, hearsay knowledge is out of date. Maya 2014 doesn't list any specific graphics cards under its requirements (and 30 seconds of extra clicking shows at least two consumer models that are both recommended and certified for use).
 
Taken slightly out of context considering the discussion, right? :)
Dunno; you brought it up.

Maya 2014 doesn't list any specific graphics cards under its requirements
Requirements aren't what we're concerned with here; we need to check what's certified. That's what determines whether you can get support from Autodesk or not.

(and 30 seconds of extra clicking shows at least two consumer models that are both recommended and certified for use).
The Titan and the GeForce 690, which was a dual-GPU-on-one-card Kepler monster. So? Those are both special cases and special cards. Autodesk does make exceptions to their policy from time to time.
 
Dunno; you brought it up.

Yes, as a valid point in the middle of a discussion; hence my out-of-context comment :)

The Titan and the GeForce 690, which was a dual-GPU-on-one-card Kepler monster. So? Those are both special cases and special cards. Autodesk does make exceptions to their policy from time to time.

I only looked because, not so long ago, I'd been told that Autodesk employees often use consumer cards.
Like I said, I don't use this application, but I assume that if you're running a system that meets the "system requirements" on a product page, Autodesk won't refuse support? Unlike previous versions, there's no mention of a certified card being required. Also, I don't understand why you're considering those two cards "special cases".



Edit: You edited your post -

Requirements aren't what we're concerned with here; we need to check what's certified. That's what determines whether you can get support from Autodesk or not.

Maybe that's the case, but that's not what the requirements page or the (conflicting) support FAQ implies to me. Even though I've never used one of their products, it's still interesting to me.
 
Got it, yeah, do they scale back under load?

No, the base clock of these cards is 650MHz, when others using the same chip, like the 280X and 7970 GE, are getting >1000MHz. I'm not saying they scale back under load; they just don't get all the way to the 850MHz boost clock.

I would be surprised, as that would seemingly contradict the 7 TFLOPS spec, which you only get under, presumably sustained, full load.

"Up to 7TFLOPS" -- not contradictory if the boost clock adds up to that, just a bit misleading if it can't be sustained. This misdirection is definitely not outside the realm of possibility (I'd argue: especially for Apple).

Another hint is that Apple has carefully tweaked these parts: special clocks, core counts, etc. It seems likely they did this to manage the thermal envelope. If they scaled back under load, why not scale up otherwise?

I sincerely doubt Apple and their special AMD deal could negotiate better technology than AMD would make commercially available to its consumers. I think it's far more likely the <150W power draw and reduced thermal output is mostly through downclocking. Binning is amazing, but not that amazing!
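For a back-of-the-envelope sense of how far downclocking goes, dynamic power scales roughly with frequency times voltage squared. The reference figures below (a ~250W 7970-class card around 1000MHz at ~1.17V, binned to run ~1.0V at 650MHz) are assumptions for illustration, not measured values:

```python
# Dynamic power scales roughly as P ~ f * V^2.
def scaled_power(p_ref, f_ref_mhz, v_ref, f_new_mhz, v_new):
    return p_ref * (f_new_mhz / f_ref_mhz) * (v_new / v_ref) ** 2

# Assumed 7970-class reference point, downclocked and modestly undervolted:
print(scaled_power(250, 1000, 1.17, 650, 1.0))  # ~118.7 W per GPU
```

That lands comfortably under the ~150W-per-GPU budget worked out earlier, without needing heroic binning.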

Finally, I work in a similar industry, and I know that thermals are one of the highest concerns. Hardware engineers don't usually play games with this; they set the ceiling at peak performance under full load and include a guard band. I doubt the Apple engineers would gimp this, but who knows; Apple engineers do "think different".

Apple already nerfs performance in some of their laptops when the GPU and CPU are under high load at the same time. It's really not a matter of "would they do it", just a matter of "are they doing it again with this machine".

So again, let's wait for the hot benchmarks :)
 
Please explain this - do you think that ECC is some process that runs at boot time?

On GPU cards it is. It's a feature synthesized on top of "normal" VRAM. [Although I highly doubt there is much, if any, boot overhead in this beyond that needed for straightforward "power on" tests of the memory anyway.]
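For context, where GPU ECC is offered it's typically synthesized by the memory controller, which reserves a slice of ordinary VRAM for the check bits, shrinking usable capacity (and costing some bandwidth) when enabled. A toy illustration; the 12.5% overhead figure is an assumption modeled on what Fermi-era Tesla cards reserved:

```python
# Synthesized ECC: check bits live in ordinary VRAM, so enabling it
# shrinks the capacity visible to applications.
def usable_vram_gb(total_gb, ecc_enabled, overhead=0.125):
    return total_gb * (1 - overhead) if ecc_enabled else total_gb

print(usable_vram_gb(6, ecc_enabled=False))  # 6.0 GB, D700 as shipped
print(usable_vram_gb(6, ecc_enabled=True))   # 5.25 GB if ECC were on
```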
 
I'm betting on no-bridge Crossfire. I'm just not seeing enough connectors back to the main board to think that there is a physical bridge there.
 
I'm betting on no-bridge Crossfire. I'm just not seeing enough connectors back to the main board to think that there is a physical bridge there.

There are about 320 pins on that connector from my count, IIRC.

I also doubt it's bridged CF; too easy to do via DMA.
 
I'm betting on no-bridge Crossfire. I'm just not seeing enough connectors back to the main board to think that there is a physical bridge there.

Seems like more than a few... (photos from http://blog.macsales.com/22108-new-mac-pro-2013-teardown)

[Image: macprotd_06.jpg]


Rectangles facing up, hanging over the edge at the bottom of the photo above.

[Image: macprotd_11.jpg]


But yeah: at least x16 PCIe worth, the DisplayPort output lanes, probably x4 PCIe worth for the SSD, and a couple more for power management and housekeeping are going to add up to more than a few. A rough tally is sketched below.
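Here's that tally as a rough sketch; every lane and wire count is an assumption (PCIe lanes are TX+RX differential pairs, so about 4 signal wires each; DisplayPort lanes are unidirectional pairs, about 2 wires each, before grounds):

```python
# Ballpark signal-pin budget for one GPU board connector (all guesses).
signals = {
    "pcie_x16_gpu": 16 * 4,     # 16 lanes to the GPU, ~4 wires per lane
    "pcie_x4_ssd":   4 * 4,     # assumed x4 link routed for the SSD
    "displayport":   3 * 4 * 2, # assume 3 DP outputs, 4 lanes each
    "housekeeping": 20,         # power management, SMBus, straps, ...
}
print(sum(signals.values()))  # ~124 signal pins; much of the rest of
                              # the ~320 pins would be power and ground
```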

It looks like there is more coming off the GPU cards than there is coming off the CPU card's edge connector. It is more like a socket design, where you can pack more in than you can with just edge pins.
 
And as the FirePro driver is what makes the W9000 expensive, I believed that the Dxx would be supported as Radeon and not FirePro.

What do you mean by this? If it's Radeon, why does the video show the FirePro driver?
 
It's not ECC. I sent an email to Phil and asked, and received a reply back from Douglas Brooks, the Mac Pro project manager; he confirmed no ECC memory for the GPUs.

Even in the D700? Or just the D300/D500? Nobody expected ECC in the D300/D500, but people did in the D700.
 
So if CF is proven to work out of the box in Windows, what is missing for it to be supported in OS X too? I mean, if the hardware is there, is it just a matter of drivers?
 
So if CF is proven to work out of the box in Windows, what is missing for it to be supported in OS X too? I mean, if the hardware is there, is it just a matter of drivers?

Yep, same as it always has been.
 
So if CF is proven to work out of the box in Windows, what is missing for it to be supported in OS X too? I mean, if the hardware is there, is it just a matter of drivers?

Not that I do a lot of gaming, but would CF work with, for example, VMware under OS X? Sure, gaming through virtualization isn't optimal, but not terribad either; I've seen figures of about 5-10% performance loss. If CF is enabled when using Win8 through VMware, wouldn't gaming be faster than OS X native on the nMP? (Yes, I know booting to Win8 for gaming is best, but it would be fun to know if it works.)
 
Not that I do a lot of gaming, but would CF work with, for example, VMware under OS X? Sure, gaming through virtualization isn't optimal, but not terribad either; I've seen figures of about 5-10% performance loss. If CF is enabled when using Win8 through VMware, wouldn't gaming be faster than OS X native on the nMP? (Yes, I know booting to Win8 for gaming is best, but it would be fun to know if it works.)

Sadly this cannot work, as the virtual machine sees only what the host offers. You cannot install AMD or Nvidia drivers for specific GPUs inside a VMware guest; the virtual machine will report a generic video adapter. 3D acceleration is possible, but only through VMware's/Parallels' own drivers (which correspond to the drivers the host operating system uses).

Regardless, if it is only a matter of drivers now, I hope that CF will be enabled on OS X very soon. It would be a shame otherwise.
 
Apple already nerfs performance in some of their laptops when the GPU and CPU are under high load at the same time. It's really not a matter of "would they do it", just a matter of "are they doing it again with this machine".

So again, let's wait for the hot benchmarks :)

And other manufacturers don't? :rolleyes: :confused: You realise that "nerfing" the CPU is Intel's Turbo Boost working as it is supposed to?
 
Sadly this cannot work, as the virtual machine sees only what the host offers. You cannot install AMD or Nvidia drivers for specific GPUs inside a VMware guest; the virtual machine will report a generic video adapter. 3D acceleration is possible, but only through VMware's/Parallels' own drivers (which correspond to the drivers the host operating system uses).

Regardless, if it is only a matter of drivers now, I hope that CF will be enabled on OS X very soon. It would be a shame otherwise.

Thanks for the explanation; I suspected this was the case. Oh well, let's just hope for updated OS X drivers in the near future then. I suspect this will come soon, as leaving an entire GPU more or less unused for tasks that can benefit from CF affects not only performance but, in the long run, the reputation of the nMP as a top-of-the-line product.
 