AMD's reason for the bridge was that PCIe 1.x and 2.0 didn't have enough bandwidth; that was their answer when asked why Hawaii suddenly doesn't need it. Apparently Tahiti cards on PCIe 3.0 don't need a bridge either, although it depends on both the motherboard and the card itself.

As far as I know the 280X (Tahiti) requires a CrossFire bridge; the only ones that don't are the Hawaii-based cards like the R9 290.

Edit: just looked it up; a couple of informal sources verify this.
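For context on the bandwidth argument: a quick back-of-envelope sketch (nothing AMD-specific, just the standard PCIe per-lane signalling rates and encodings) showing why a PCIe 3.0 x16 link has roughly double the bandwidth of 2.0 for shuttling frames without a bridge:

```cpp
// PCIe x16 bandwidth per generation; "encoding" is the line-code
// efficiency (8b/10b for gen 1/2, 128b/130b for gen 3).
#include <cstdio>

int main() {
    struct Gen { const char* name; double gtps; double encoding; };
    const Gen gens[] = {
        {"PCIe 1.x", 2.5, 8.0 / 10.0},
        {"PCIe 2.0", 5.0, 8.0 / 10.0},
        {"PCIe 3.0", 8.0, 128.0 / 130.0},
    };
    for (const Gen& g : gens) {
        // GT/s x encoding efficiency / 8 bits = GB/s per lane
        double per_lane = g.gtps * g.encoding / 8.0;
        printf("%s: %.2f GB/s per lane, %.2f GB/s at x16\n",
               g.name, per_lane, per_lane * 16.0);
    }
    return 0;
}
```

That works out to roughly 4, 8, and 15.75 GB/s for an x16 slot, which is the headroom AMD's bridgeless-CrossFire argument leans on.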
 
As far as I know the 280X (Tahiti) requires a CrossFire bridge; the only ones that don't are the Hawaii-based cards like the R9 290.

Edit: just looked it up; a couple of informal sources verify this.

Interesting, that flies right against what AMD said at their livestream for the GPUs. Maybe they just didn't want to bother with even a slight redesign of their older cards, and just went for the BIOS and ID flash instead: 7970 to 280X and all that.

Well, I hope the software developers and/or AMD manage to leverage that second GPU to its fullest.

The main issue with Premiere so far has been that it doesn't use a second GPU either, even in Windows. I know it's extremely rare for game devs to even bother with multiple GPUs; they usually just do the bare minimum in added SLI or CrossFire support, leaving all the real work for the driver teams to sort out.

It's taken some devs years to even bother trying to make games multi-threaded for CPUs. The Old Republic was hilarious when they couldn't properly manage it and instead forced the game to run two executables/launchers, with a third piece of software to mediate between the two, all just to try to make it use more cores.
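On the multi-GPU point: a minimal OpenCL host-side sketch (hypothetical standalone code, not anything from Adobe or a game engine) showing why the second GPU needs explicit developer support; the runtime only hands the app a device list, and splitting the work is entirely up to the application:

```cpp
// Enumerate GPUs via OpenCL; nothing here (or anywhere in the runtime)
// automatically spreads a workload across more than one device.
#include <OpenCL/opencl.h>   // <CL/cl.h> on non-Apple platforms
#include <cstdio>

int main() {
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);

    cl_device_id gpus[8];
    cl_uint count = 0;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, gpus, &count);
    printf("Found %u GPU(s)\n", count);

    for (cl_uint i = 0; i < count; ++i) {
        char name[256];
        clGetDeviceInfo(gpus[i], CL_DEVICE_NAME, sizeof(name), name, nullptr);
        printf("  device %u: %s\n", i, name);
        // To actually use device i, the app must create its own context
        // and command queue here, then hand it a slice of the workload
        // (e.g. alternate frames). That per-device plumbing is the part
        // most developers skip, which is why second GPUs sit idle.
    }
    return 0;
}
```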
 
However, just in case you're right, I've got plenty of open double-wide PCIe slots for R9 280Xs and R9 290Xs if OCL really takes off.
Better yet, if you're running NVIDIA hardware, you already have OpenCL support, too. So you won't be left out in the cold, like I would be if I bought the nMP. NVIDIA users can transition between CUDA and OpenCL at their own pace; AMD users are totally dependent on developers' timelines.
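To illustrate why that transition is gentle on the code side: a hypothetical vector-add kernel (not tied to any particular app), which moves between CUDA and OpenCL almost line for line:

```cpp
// Hypothetical vector-add, sketching how similar the device code is.

// CUDA version (compiled ahead of time with nvcc):
//   __global__ void vadd(const float* a, const float* b, float* c, int n) {
//       int i = blockIdx.x * blockDim.x + threadIdx.x;
//       if (i < n) c[i] = a[i] + b[i];
//   }

// OpenCL version, built at runtime from a source string:
static const char* kVaddSource = R"CLC(
    __kernel void vadd(__global const float* a, __global const float* b,
                       __global float* c, int n) {
        int i = get_global_id(0);
        if (i < n) c[i] = a[i] + b[i];
    }
)CLC";
```

The host-side setup differs more (contexts and queues versus the CUDA runtime), but the kernels themselves port almost mechanically.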
 
Better yet, if you're running NVIDIA hardware, you already have OpenCL support, too. So you won't be left out in the cold, like I would be if I bought the nMP. NVIDIA users can transition between CUDA and OpenCL at their own pace; AMD users are totally dependent on developers' timelines.

So true. However, I'd be partial to AMD for OCL if it really covered the bases that I need. AMD is to OCL what KFC used to be to chicken: when you do one thing, you'd better get it right, and better than your competitors.
 
So true. However, I'd be partial to AMD for OCL if it really covered the bases that I need.
Yup, me too. I just think it's gonna take at least a year or two to really see where things are at. My NVIDIA card(s) will let me get my work done either way while the dust settles. Then who knows...I've got no problem switching.
 
http://www.macworld.com/article/208...ter-weve-been-waiting-for-finally.html?page=2

I tried the Unigine Valley benchmark at 1920x1080 (since the article doesn't specify, I set everything to maximum) to see how my humble GTX 680 Mac Edition with just 2GB of VRAM would hold up against the new Mac Pro's dual D700s equipped with 6GB of VRAM each. Surprisingly enough, it scored slightly better (32.30 vs 31.70)! Now, I suppose the Valley benchmark can't take complete advantage of the power of the dual D700s, and that the D700s are designed for another type of graphics work, but I'm still quite surprised by the result; I expected the nMP to smoke the competition in about every scenario.



How fast is your Mac Pro? CPU does make a difference.
 
I think this is amazing news. If the D700 is almost as fast as the GTX 680, which is quite a fast card anyway, then it's all good. I expected it to be a bit slower than the GTX 680, since AMD 7970 results in Mac Pros were lower than this.
 
Actually, Premiere Pro CC does utilize dual GPUs on export and when using Adobe Media Encoder. That's huge.

See: http://blogs.adobe.com/premierepro/2013/06/adobe-premiere-pro-cc-and-gpu-support.html

The new CC update released the other day now uses both GPUs within Premiere as well… The Verge and other places doing Mac Pro reviews didn't bother to update; otherwise their results would've been a little different when comparing to FCPX.

Thanks chaps! I didn't know the latest update sorted that.

I hope other sites like AnandTech and Ars Technica will take this into consideration. So far none of the benchmarks have been in-depth enough for me. I want to see the inside of the machine, its real guts, in a teardown.
 
The new CC update released the other day now uses both GPUs within Premiere as well… The Verge and other places doing Mac Pro reviews didn't bother to update; otherwise their results would've been a little different when comparing to FCPX.

It does? That's news to me... do you have a reference?
 
Zero surprise here. As users of Nvidia Quadro cards for years, we know well that workstation cards ≠ gaming cards. Our Quadros do only so-so in the gaming arena, yet blow away gaming cards when it comes to Maya work.
 
Have any of you guys looked at this thread:
https://forums.macrumors.com/threads/1688288/

Summary:

Crossfire is supported in Windows:
http://www.youtube.com/watch?v=TdLOh8MdU20&feature=youtu.be

But I'm not sure how that translates to gaming, as the BF4 demo above looks like it was using one card, judging by comparative results against a Titan.

Surprisingly, the 3DMark test using both D700 cards beat a Titan gaming rig.
http://www.3dmark.com/3dm/2038247

For a workstation card to beat a top-end gaming card in a DirectX 11 test is something special, I think. And this is with drivers optimised for pro work (floating-point computation etc.).

Then again, real-world game tests vs benchmarks can be chalk and cheese. Look at COD Ghosts as an example: it is a seriously bad port of a console game and thus not optimised for high-end PC systems, especially SLI/Crossfire.

Anim
 
Zero surprise here. As users of Nvidia Quadro cards for years, we know well that workstation cards ≠ gaming cards. Our Quadros do only so-so in the gaming arena, yet blow away gaming cards when it comes to Maya work.

I think the Titan has proved to be one of the best gaming cards in recent years. And the Titan is a Quadro K6000 that did not make the cut. They can more than hold their own; it's just that gamers would be crazy to spend that much money.

----------

Have any of you guys looked at this thread:
https://forums.macrumors.com/threads/1688288/

Summary:

Crossfire is supported in Windows:
http://www.youtube.com/watch?v=TdLOh8MdU20&feature=youtu.be

But I'm not sure how that translates to gaming, as the BF4 demo above looks like it was using one card, judging by comparative results against a Titan.

Surprisingly, the 3DMark test using both D700 cards beat a Titan gaming rig.
http://www.3dmark.com/3dm/2038247

For a workstation card to beat a top-end gaming card in a DirectX 11 test is something special, I think. And this is with drivers optimised for pro work (floating-point computation etc.).

Then again, real-world game tests vs benchmarks can be chalk and cheese. Look at COD Ghosts as an example: it is a seriously bad port of a console game and thus not optimised for high-end PC systems, especially SLI/Crossfire.

Anim

Not at all. The Titan is based on a workstation card too. The Crossfire config has the advantage of two GPUs versus one; it just shows how good the Titan is, but as I said before, its roots are workstation.
 
I think the Titan has proved to be one of the best gaming cards in recent years. And the Titan is a Quadro K6000 that did not make the cut. They can more than hold their own; it's just that gamers would be crazy to spend that much money.

Especially crazy now that AMD have released the R9 290(X) cards (out for months now), which are like 1/3 of the cost for equal performance :)
 
For a workstation card to beat a top-end gaming card in a DirectX 11 test is something special, I think. And this is with drivers optimised for pro work (floating-point computation etc.).

Then again, real-world game tests vs benchmarks can be chalk and cheese. Look at COD Ghosts as an example: it is a seriously bad port of a console game and thus not optimised for high-end PC systems, especially SLI/Crossfire.

Anim

Not really all that special. The W9000 (975MHz) behaves almost identically to the 7970GE (1000MHz) in gaming (+/- 1%). The D700 is basically the same card, just significantly underclocked (650-850MHz) and binned to save power.

I think when all is said and done, the D700 will provide decent gaming performance. How well will depend on the actual ability to keep the boost clock up (the base clock is a meager 650MHz) when both GPUs are running full-bore. Apple really has a lot riding on this "thermal core"; if it can't keep up, performance will really suffer. There are also apparently problems with running Crossfire over PCIe 3.0, so that'll need to be investigated if that's how they're doing it.

In short, we need thorough testing and benchmarks, but the performance itself isn't impressive.

The thing that's really "special" about this is that they included Crossfire at all, a feature I really wasn't expecting. If it's over PCIe 3.0, that means a deviation from the standard Tahiti cards on the market; if they included Xfire over the proprietary card pinout, that shows someone at Apple really wanted to throw gamers a bone.
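To put rough numbers on the boost-clock concern: a back-of-envelope sketch, assuming the D700 is a full 2048-shader Tahiti part (per Apple's published specs) and GCN's usual 2 FLOPs per shader per clock:

```cpp
// Theoretical single-precision throughput for one D700 at base vs boost.
#include <cstdio>

int main() {
    const double shaders = 2048.0;
    const double flops_per_clock = 2.0;          // fused multiply-add
    const double clocks_ghz[] = {0.650, 0.850};  // base vs boost

    for (double ghz : clocks_ghz) {
        double tflops = shaders * flops_per_clock * ghz / 1000.0;
        printf("%4.0f MHz: %.2f TFLOPS\n", ghz * 1000.0, tflops);
    }
    return 0;
}
```

That's about 2.66 TFLOPS at base versus 3.48 TFLOPS at boost, roughly a 24% swing, which is exactly what's riding on the thermal core keeping both GPUs at the top of that range.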
 