First, if we are talking Fury X: maximum board power is rated at 375 W. Nominal TDP at stock clocks is 275 W, and we can expect the actual power draw to be lower.

Now take this. The Fury Nano is rumored to be a 4096-core GCN GPU with around 7.8 TFLOPs of compute power at a 175 W TDP, which means it runs at around 950 MHz.
Running it at 850 MHz would bring it to roughly 125 W of TDP, which would fit the current Mac Pro design. And the Retina iMac.
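
For a quick sanity check on those numbers: GCN peak FP32 throughput is cores × 2 FLOPs per clock (FMA) × clock. A minimal sketch, using only the rumored figures above (nothing here is a confirmed spec):

```python
# Peak FP32 throughput for a GCN GPU: cores * 2 FLOPs/clock (FMA) * clock.
# The figures are the rumored Fury Nano numbers from above, not confirmed specs.
CORES = 4096

def peak_tflops(clock_ghz: float) -> float:
    return CORES * 2 * clock_ghz / 1000.0

print(peak_tflops(0.95))  # ~7.78 TFLOPs at 950 MHz, matching the ~7.8 rumor
print(peak_tflops(0.85))  # ~6.96 TFLOPs at the hypothetical 850 MHz part
```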

AidenShaw, I really think you should ask people more knowledgeable than your sources, because AMD will survive not only the next 12 months but a couple of years more. The latest rumors about splitting AMD into two companies were spread by, well... Nvidia, which is scared of next year's process node and AMD's APUs.

Imagine this. AMD ships an APU performing like an Intel Haswell Core i5, with 2048 GCN cores and 2 GB of HBM, plus the option of adding DDR4. THAT is what Nvidia is scared of. And it is not fantasy. Yep: a Haswell-class i5 coupled with a Radeon R9 280X at a price of... $150? Make it even $200 and you have the bargain of the century. That is the reality of next year.

AMD APUs aren't able to keep up with Intel. And with Intel iGPUs getting better and better, they soon won't even be able to compete at the cheap end.

And where do you get those silly prices? The R9 280X by itself is a $250 card, and AMD is in no financial position to sell at a loss. Their market share has decreased over the last 5 years to just about 25% now. And I'm not hating on AMD; I've been using their products since the mid '90s. But sometimes you have to wake up and smell the coffee.

The simple fact that they are still pushing their old architecture with a second rebrand means that they're not even sure their gamble (HBM/Fury) will pay off. Only time will tell if they'll be competitive.
 
The last AMD CPU I purchased was a Socket 939 4200+ (my very first build was a Duron, but I digress). The 4200+ was, I believe, one of the last competitive chips AMD had that matched Intel. What makes you think this will all magically turn around next year? They haven't been competitive on the CPU front for 7-8 years, in my opinion. I think they should stick to discrete GPUs; that is where their future lies. To put it bluntly, APU = HTPC/laptop and that's about it. Let's not kid ourselves.
I think AMD saw the writing on the wall themselves when they decided to acquire ATI a few years back.

AMD is working on a new processor architecture, due in '16, to replace Bulldozer et al.
 
Next year you will have the completely revised Zen architecture, with wide cores, made on 14 nm. It will be a really wide architecture, where a quad-core APU will have the performance of an Intel Haswell Core i5. That APU will also have a 2048-core GCN GPU with 2 GB of HBM and 256 GB/s of bandwidth shared between the CPU and the GPU. Add to that: every gaming API right now has Mantle at its base, and AMD has terrific OpenCL performance.

Are you sure you know what you are talking about? Because I don't think so.
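
As a side note, the 2 GB / 256 GB/s pairing is at least internally consistent with two first-generation HBM stacks; a quick check, assuming HBM1's published per-stack numbers (1 GB and 128 GB/s per stack):

```python
# Sanity check on the rumored APU memory claims, assuming HBM1 per-stack specs:
# a 1024-bit interface at 1 Gb/s per pin = 128 GB/s, and 1 GB capacity per stack.
STACK_CAPACITY_GB = 1
STACK_BANDWIDTH_GBPS = 128

stacks = 2
print(stacks * STACK_CAPACITY_GB)     # 2 GB, the rumored capacity
print(stacks * STACK_BANDWIDTH_GBPS)  # 256 GB/s, the rumored bandwidth
```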
 

Mantle is dead: http://www.pcworld.com/article/2891672/amds-mantle-10-is-dead-long-live-directx.html

And you're saying that in 2016 their new Zen arch will have the power of a 2-year-old mid-range CPU from Intel, with a 3-year-old GPU carrying half the RAM of a current low-end model... Yeah, not buying it either...
 
I guess you still have no idea what you are talking about. But that is OK.

Haswell may be 2 years old, but it is still the go-to platform right now, because there is nothing else available from Intel apart from 3 Broadwell CPUs. Secondly, the 2 GB is not system memory; for that there will be DDR4. Those 2 GB of HBM will act as eDRAM for the CPU and GPU, same as Intel's, but 16 times larger and 2 times faster. The GPU, with the whole package made on 14 nm, will have 2048 GCN cores, same as the current D700 in the Mac Pro and the R9 M295X in the 5K iMac. With that bandwidth it will be enough to even play at 1440p.
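
To put the "16 times larger and 2 times faster" claim in perspective, here is the rough comparison against Intel's Crystal Well eDRAM, assuming the commonly cited ~128 MB capacity and ~100 GB/s aggregate bandwidth for Intel's part (the AMD figures are rumors):

```python
# Rough comparison of the rumored AMD HBM cache vs. Intel's Crystal Well eDRAM.
# Assumed Intel figures: 128 MB, ~100 GB/s aggregate. AMD figures are rumors.
intel_capacity_mb, intel_bw_gbps = 128, 100
amd_capacity_mb, amd_bw_gbps = 2048, 256

print(amd_capacity_mb / intel_capacity_mb)  # 16.0 -> "16 times larger"
print(amd_bw_gbps / intel_bw_gbps)          # ~2.6 -> roughly "2 times faster"
```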

Let's sum up, then. An R9 280X is about $220, and a Haswell Core i5 is around $180-190, so roughly $400 total. AMD's proposition will cost much less, with a much smaller footprint and a much lower TDP. And we still don't know how HBM will affect the performance of Zen's Haswell-ish style of architecture; it could make it even faster.

You can see why Nvidia is scared to bits. The 14 nm process will bring a huge leap in technology.

The only reason people think AMD is completely incompetent is brand appeal. I really, really hope that engineering will end up meaning more than pure marketing, because with Nvidia there is far more marketing than real engineering leaps.

Let's end this complete off-topic.

One more thing: Mantle is not dead. It is the base of every modern API: DirectX, Vulkan, Metal. Mantle is also the name of the experimental laboratory inside AMD where they test APIs for clients with custom needs, and where they are working on the next versions of, and additions to, Mantle.
 

Here comes the fanboy rant...

There isn't any part of Mantle in DirectX. Vulkan IS Mantle, or more precisely a fork of it. Here, read what Mantle is before you continue digging: https://en.wikipedia.org/wiki/Mantle_(API) You are blowing smoke now. Mantle was an API pushed by AMD but adopted by no one else; it was a solution in search of a problem. Did you honestly believe that AMD is in any position to impose anything on anyone else?

AMD is down to a 25% market share. Nvidia isn't the least bit afraid, because contrary to AMD they have products worth buying if you're into high-power GPUs. AMD's APU/CPU line is on life support; besides cheap Walmart PCs, no one is using them. They haven't been relevant in that market for years.
 
So explain to me: why does the functional, low-level code for DirectX 12 differ from Mantle code only in the first 3 letters?

Explain to me why the DirectX programming guide documentation is a slightly differentiated copy of the Mantle programming guide documentation.

I will not say anything more about this. Let's end it here and now.

R. Huddy said:
While Huddy didn't say how closely OpenGL Next might mirror Mantle, he repeated the contention that Mantle shaped DirectX 12's development. We expressed some doubts about that contention when we addressed it earlier this year, but Huddy was adamant. Development on DirectX 12's new features may have begun before Mantle, he said, but the "real impetus" for DX12's high-throughput layer came from the AMD API.

Edit: One more thing. DirectX 12 and 11.3 are the same apart from... the low-level part of DirectX 12.

DirectX 12, Metal, and Vulkan allow the use of integrated and discrete GPUs simultaneously. All 3 have async shaders.

Let's end it here now, because I could give far more arguments than only these.
 

And where did you read the low-level code for DirectX 12, since it's closed source? And which part of DirectX, since it covers a wide range of functions? Mantle was only a helper API, not a full-blown replacement API; to code something with Mantle on Windows you pretty much had to code for DirectX as well, hence why the docs can look similar.

They're not the same thing; they're competing products. And those arguments of yours went out the window when AMD themselves canned the product, as noted in the article I linked previously. And here is another one making the distinction: http://www.pcworld.com/article/2109...-pc-gamings-software-supercharged-future.html
 
I know that they are not the same. You completely misunderstand what I'm saying. I'm saying that the FUNCTIONAL base of DirectX is Mantle; everything else, the whole library, is built on top of it. Metal is the same kind of example.

It's exactly what Richard Huddy, one of AMD's directors, said, in plain English. I never stated that DirectX is Mantle, only that Mantle is the low-level base of DirectX. Or at least that the way a DirectX 12 application talks to the GPU is the same way as with Mantle, Vulkan, or Metal.
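
Whatever their ancestry, Mantle, DirectX 12, Vulkan, and Metal do share the same explicit model: the application records work into command buffers and submits them to queues itself, rather than the driver doing it implicitly. A toy sketch of that shared pattern; every name below is a hypothetical illustration, not any real API:

```python
# Toy model of the explicit command-buffer pattern shared by Mantle, DX12,
# Vulkan, and Metal. All names here are hypothetical, not a real API.
class CommandBuffer:
    def __init__(self):
        self.commands = []

    def draw(self, vertex_count):
        self.commands.append(("draw", vertex_count))  # recorded, not executed

class Queue:
    def submit(self, cmd):
        for op, arg in cmd.commands:  # the "GPU" plays the list back on submit
            print(f"execute {op}({arg})")

cmd = CommandBuffer()
cmd.draw(3)
cmd.draw(6)           # recording is cheap and can happen on any thread
Queue().submit(cmd)   # the app, not the driver, decides when work is sent
```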
 

You have a lot of faith in ATI.

I don't believe in faith-based computing - Nvidia and Intel are delivering high performance power-efficient designs today.

I believe that the delay in updating the MP6,1 is due to Apple jumping from the sinking ATI ship and moving to Maxwell GPUs in the new Tube.
 
I know that they are not the same. You completely misunderstand what I'm saying. I'm saying that the FUNCTIONAL base of DirectX is Mantle; everything else, the whole library, is built on top of it. Metal is the same kind of example.

It's exactly what Richard Huddy, one of AMD's directors, said, in plain English. I never stated that DirectX is Mantle, only that Mantle is the low-level base of DirectX. Or at least that the way a DirectX 12 application talks to the GPU is the same way as with Mantle, Vulkan, or Metal.

No, it isn't. Stop talking bloody nonsense. DirectX doesn't use Mantle at all. Microsoft wrote the same functionality that Mantle has from scratch, but it isn't Mantle. And again, Mantle is dead; AMD canned it.

Huddy can spout any ******** he wants, but I'll take Microsoft's word over his when it comes to DirectX.

Again, go read what Mantle was, and read the two links I've posted.
 
It is a dead end only if there are never any future improvements in components, but that isn't going to happen.

They could keep the same basic design and increase the diameter and height from 6.6" d, 9.9" h to 7.6" d, 10.9" h without giving up on the approach.
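
For scale, treating the enclosure as a simple cylinder (which it only roughly is), that bump works out to about 46% more internal volume:

```python
# Volume change for the suggested enclosure bump, modeling the Mac Pro as a
# plain cylinder. It isn't exactly one, so treat this as a rough estimate.
from math import pi

def cylinder_volume(diameter_in, height_in):
    return pi * (diameter_in / 2) ** 2 * height_in

current = cylinder_volume(6.6, 9.9)    # ~339 cubic inches
proposed = cylinder_volume(7.6, 10.9)  # ~494 cubic inches
print(proposed / current)              # ~1.46, i.e. about 46% more volume
```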

You are confirming that the current Mac Tube is a "dead end design" if you are suggesting a redesign....
 

Just to chime in with my two cents.

1. The fact we didn't see a Mac Pro announcement at WWDC is pretty good evidence there will not be a speed-bump update to the new Mac Pro. The chipsets are out now, and this could have been implemented already if Apple wanted to go the speed-bump route. The cynic in me might say Apple has excess stock of the current model and wants to clear house before building updated models.

2. Given they have indicated, or perhaps rather insinuated (as above), that a speed bump won't be coming, the next logical conclusion is that Apple is waiting for TB3 devices and implementation before the new Mac Pro is revealed.

3. Given the R&D spent on the new trash can, we won't see a redesign. We will likely see a new model of the same design that offers TB3, amongst the usual card and RAM speed increases.

4. When? Probably not before the first quarter of next year would be my best bet.
 


This.
 
There seems to be little reason to update the nMP right now.
Haswell would be a good update, for the DDR4, AVX2, and all.
But Broadwell is just around the corner: die shrink, IPC improvements, speedier RAM.
My guess is they'll wait.
Also, with the current GPU lineup, things are a bit stalled. Grenada is no improvement over the existing Hawaii. If only AMD had done a "Super Tonga", much like Fiji but with GDDR5, a dual Tonga if you will, to replace Hawaii. At least that would warrant a new name and something truly new(ish).
But power is always the limiting factor here.
As for storage and TB3, they could use a switch (again, and even the same one), this time to feed 3 TB3 controllers and 1 SSD, or even 2 TB3 controllers and 2 SSDs. That would be nice. I'd much prefer a no-switch solution, but there are not enough lanes. I would even risk going with only 1 TB3 controller and 1 SSD directly connected to the CPU, no switch. The rest of the ports would be USB 3, possibly the full 6 of them.
That would cover most needs, but as usual some people would be left out in the cold, needing more.
Twin GbE and HDMI 2.0 as usual, too.
Give it a few more months, though...
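
The lane budget behind that "no switch" option, assuming a 40-lane Xeon with both GPUs kept on x16 links (my assumption, not a confirmed layout):

```python
# PCIe lane budget for the "1 TB3 controller + 1 SSD, no switch" idea.
# Assumes a 40-lane Xeon with both GPUs on x16 links (an assumption).
CPU_LANES = 40
GPU_LANES = 2 * 16             # two GPUs at x16 each
spare = CPU_LANES - GPU_LANES  # 8 lanes left for everything else

TB3_CONTROLLER = 4  # a Thunderbolt 3 controller takes a PCIe 3.0 x4 link
NVME_SSD = 4        # an NVMe SSD takes a PCIe 3.0 x4 link

print(spare)                                       # 8
print(TB3_CONTROLLER + NVME_SSD <= spare)          # True: 1+1 fits, no switch
print(2 * TB3_CONTROLLER + 2 * NVME_SSD <= spare)  # False: 2+2 needs a switch
```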
 

There is a Fury X2 in the works, but you still have the problem of limited VRAM, and of how to fit the CLC system in the tube, given that the non-liquid-cooled Fury is a stripped-down version. We don't need another castrated GPU in this machine, no?
 

One thing I haven't seen addressed is what AMD's plans are for the FirePro version of Fiji. Is there a plan? Can they convince users who are used to 8 GB and 16 GB of memory to drop down to 4 GB? It doesn't seem like AMD would want to skip a generation of professional GPUs.

It wouldn't surprise me to see Fiji in a Mac Pro. The Fury Nano is supposed to be a 175 W part, which is close to the 125-150 W range it needs to be in. Apple could probably spin HBM as a good thing, and use Metal to achieve memory pooling across the GPUs, or just suggest that it's fast enough to swap with system memory.


Fury X2 is dual GPUs on the same card, and since the Mac Pro already has dual GPUs, there is no reason to do this.
 

They may skip FirePro this generation. Nvidia has been running two major designs per generation: one "big" design that leans toward computation, and one "smaller" design that leans toward maximum graphics throughput. AMD has tried a "best while doing both" approach: fewer resources, so one design to fit "most". Just moving to HBM is a huge stretch.

I doubt AMD wants to come out with the Zen design in 2H 2016 either. They are doing it because that's the hand they have been dealt.

Second, >4 GB wasn't most of their lineup anyway. Last summer the only new one over 4 GB was the W7100:

http://www.anandtech.com/show/8371/amd-firepro-w7100-w5100-w4100-w2100

They still haven't dropped the original W7000 series, which means they probably won't mind having overlaps. The old W8000/W7000 didn't break the 4 GB barrier either. So if Fiji dropped in as a "W7150", it could sit in the catalog alongside the W7100.

Depending on how much extra the 8-Hi stacks cost, they might be able to do an 8 GB Fiji variant. But the VRAM would cost more than double for the increase, so it may make more sense just to wait for HBM2.
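
For what it's worth, the capacity math checks out: Fiji's 4 GB is four 4-Hi HBM1 stacks of 1 GB each, so 8-Hi stacks would double it. Whether 8-Hi HBM1 actually ships in volume is exactly the open question:

```python
# Capacity math for a hypothetical 8-Hi Fiji, using HBM1's 2 Gb (256 MB) dies.
DIE_GB = 0.25   # one HBM1 DRAM die
STACKS = 4      # Fiji's interposer carries four stacks

print(STACKS * 4 * DIE_GB)  # 4.0 GB with today's 4-Hi stacks
print(STACKS * 8 * DIE_GB)  # 8.0 GB if 8-Hi stacks became available
```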


Fury X2 is dual GPUs on the same card, and since the Mac Pro already has dual GPUs, there is no reason to do this.

I think the point of bringing up the Fury X2 is busting the thermal envelope of the Mac Pro, more so than fitting dual GPUs in limited space. What the Mac Pro doesn't have is two GPUs presenting the virtual image of being one "bigger" GPU. In Windows, yes, but not in OS X.
 
Also, with the current GPU lineup, things are a bit stalled. Grenada is no improvement over the existing Hawaii. If only AMD had done a "Super Tonga", much like Fiji but with GDDR5, a dual Tonga if you will, to replace Hawaii. At least that would warrant a new name and something truly new(ish).

If you look at the AMD videos about Fury, it seems obvious the folks at AMD declared GDDR5 a dead end several years ago.
If AMD had gobs of resources to fund 4-6 separate design teams to move Tonga and Fiji forward at the same time, then perhaps that would be a possible move, but AMD doesn't have tons of extra cash flow.

They need to put their wood behind just a couple of arrows. A team on a "Super Tonga", or a team on HBM2? The latter is more strategic long-term. Similarly, doing a 12-14 nm design is harder than 28 nm. Again, you want more folks on the future-critical tech at this point.



As for storage and TB3, they could use a switch (again, and even the same one), this time to feed 3 TB3 controllers and 1 SSD, or even 2 TB3 controllers and 2 SSDs. That would be nice. I'd much prefer a no-switch solution, but there are not enough lanes.

It is not going to help to dilute the TB3 PCIe bandwidth if you are trying to "blast" it with the full 40 Gb/s of the new TB bus. If you dilute the data feed, you dilute what the ports can deliver.

PCIe switches work better when the stuff being switched isn't consuming the whole feed. So the two x4 slots in the MP 2009-2012 work OK as long as you put things in them that pragmatically pull x2 or x1 kinds of workloads. Put two fully saturating x4 loads in there and the switch will be exposed.

TB and the SSD both run close to the limit. SSDs are even more likely to once they are NVMe.

I would even risk going with only 1 TB3 controller and 1 SSD directly connected to the CPU, no switch. The rest of the ports would be USB 3, possibly the full 6 of them.

1 TB3 controller puts the MP back in the same zone as the MBP, iMac, and Mini when it comes to TB ports. It is a net decrease in ports, which is a bigger handicap for the MP at its significantly higher price point. Minimally they'd need to put Type-C DisplayPort Alternate Mode onto 2-4 of those USB ports to keep the display connectivity up. That leaves maybe just 2 legacy Type-A plugs, which is likely to cause problems for those who have to deal with license dongles and legacy USB connectivity.

There are 4 USB and 6 TB ports now. If they switched to 6 USB and 4 TB, the overall count would still be 10.
The total available bandwidth isn't down either, since the 4 TB3 ports (2 controllers × 40 = 80) offer more than the 6 TB2 ports (3 controllers × 20 = 60). It is the same x8 worth of PCIe v3 provisioning the underlying PCIe components. And you don't need a switch at all (which has some power overhead).
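
Worked out, the port and bandwidth trade looks like this (per-controller link rates; each controller drives two ports):

```python
# Port count vs. aggregate bandwidth for the two layouts discussed above.
# Each TB2 controller feeds two 20 Gb/s ports; each TB3 controller two 40 Gb/s.
today = {"controllers": 3, "gbps_per_controller": 20}     # 6 TB2 ports
proposed = {"controllers": 2, "gbps_per_controller": 40}  # 4 TB3 ports

def ports(cfg):
    return cfg["controllers"] * 2

def aggregate_gbps(cfg):
    return cfg["controllers"] * cfg["gbps_per_controller"]

print(ports(today), aggregate_gbps(today))        # 6 ports, 60 Gb/s
print(ports(proposed), aggregate_gbps(proposed))  # 4 ports, 80 Gb/s
```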
 
You are confirming that the current Mac Tube is a "dead end design" if you are suggesting a redesign....

An adjustment, not a wholesale redesign.

Mini Original 2006 (https://support.apple.com/kb/SP65):
  • Height: 2 inches (5.08 cm)
  • Width: 6.5 inches (16.51 cm)
  • Depth: 6.5 inches (16.51 cm)

Mini 2011 (https://support.apple.com/kb/SP633):
  • Height: 1.4 inches (3.6 cm)
  • Width: 7.7 inches (19.7 cm)
  • Depth: 7.7 inches (19.7 cm)

Same basic approach, but not exactly the same. The Mini is probably about to change again. You can adapt to the components without having to throw out the basic design approach.

The iMac has jumped around even more. The MBA has changed over the years.

Sure, if you design a box that is purposely oversized, then you can stick with exactly the same dimensions; but even the "fixed in stone" dimensions of the Power Mac G5 through Mac Pro 2012 were adjusted internally over that design's life.

There is a huge difference between not-quite-optimized and a "dead end" design. For folks screaming about Apple's evil "form over function", it is a bit peculiar to be firmly committed to a fixed set of dimensions. A basic property of form is its dimensions.
 

MP6,1:
[image: Wright Flyer]

Design "adjustment":
[image: Boeing 787-9 Dreamliner at Paris]


If they have to re-engineer it, it's a new design, even if it looks a lot like the old design. An old Titanium PowerBook G4 looks a lot like today's MacBook Pro; would you argue that they are the same design?

By the way, a year and a half ago I posted a mockup of a redesign of the 6,1 to support dual CPUs:

[image: mockup of a taller dual-CPU 6,1]


It's a crude graphics job, but I simply made it taller so that a second CPU and set of DIMMs could be placed above the first. I kept the proportions the same, which increased the space so that 8 or 12 DIMMs per CPU were possible (and left plenty of room for a workstation-class power supply).

P.S.: If you haven't seen the "Paris rehearsal video" for the Dreamliner, check it out.
 
Fury X2 is dual GPUs on the same card, and since the Mac Pro already has dual GPUs, there is no reason to do this.

Well, you would then have 4... ;-)
 