I think I know why OC on the Fury X brings such a small performance gain, and why overall the GPU is, well, a bit of a disappointment.

... something is really wrong here. Not with the GPU itself, but definitely with the games... I have no doubt right now that there is something wrong with the games.

So, all the games are broken? Amazing.

You have uncovered a worldwide conspiracy to make Fury appear to be a power slurping space heater. All the game manufacturers conspired years ago to make sure it would come out this way.

The New York Times should be contacted.
 
Are you willing to add anything constructive to this thread? Are you willing to add anything constructive to what I have written?

If not, go troll away somewhere else.
 
First you said that Fury wasn't "OC'ed all to hell"

Then you admitted it was and that lowering clocks 100% meant it ran on 9 Watts and gave 99% performance. (i.e., it was "OC'ed to hell" but didn't need to be)

Now you claim that the card is fine and all of the game coders made a mistake, except the 2 or 3 titles where it doesn't do poorly.

"all over the map".
 
Looks like it is not OC'ed to hell if not all the cores are used. Fury and Fury X should be much faster. OC gains should be higher than they are, because the GPU is much wider than Nvidia's. But they are not. Why? Because part of the GPU is idling; games are not using most of the cores. 100 MHz on the core brings 0.8 TFLOPS more compute. A 5% higher clock on the R9 390X brings 10 FPS in the same game, with the same drivers, compared to the R9 290X. How much does an OC bring on the Fury X? OC'ed by 10%? Almost nothing. And what is funnier, exactly that explains why 3DMark benchmarks get such an increase in performance when OC'ed: because they are using all of the cores of the Fury X and Fury.
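As a back-of-the-envelope check on that 0.8 TFLOPS figure (a minimal sketch, assuming Fiji's 4096 stream processors and 2 FLOPs per shader per cycle from FMA):

```python
# Theoretical FP32 throughput: shaders * 2 FLOPs (FMA) * clock.
# Assumes Fiji's 4096 stream processors; purely back-of-the-envelope.

def fp32_tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz * 1e6 / 1e12

FIJI_SHADERS = 4096

stock = fp32_tflops(FIJI_SHADERS, 1050)    # ~8.6 TFLOPS at stock
plus100 = fp32_tflops(FIJI_SHADERS, 1150)  # ~9.4 TFLOPS with +100 MHz

print(f"stock: {stock:.2f} TFLOPS, +100 MHz: {plus100:.2f} TFLOPS")
print(f"delta: {plus100 - stock:.2f} TFLOPS")  # ~0.82 TFLOPS per 100 MHz
```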

That is the problem here.

And no, I did not admit anything. From the start I was saying that it can be downclocked and undervolted to a much lower thermal envelope.

Edit: I've gone back to that TechPowerUp article about OC on the Fury X.

Which still backs up my theory. Nominal clocks (1050 MHz core / 500 MHz memory): 51 FPS.
OC'ed (1215 MHz / 560 MHz): 56 FPS.
The gains are too small in comparison to OC results on other GPUs, even AMD's own.
http://tpucdn.com/reviews/AMD/R9_Fury_X/images/bf3_3840_2160.gif
Look at the difference between the R9 290X and 390X. A 5% higher core clock on the 390X translates to 5 FPS. How much more core clock do you need to get that 5 FPS on the Fury X? 165 MHz, which is roughly 16% more, plus a 12% higher memory clock. That is because most of the cores are completely idle, or bottlenecked by the games.
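For what it's worth, here is the same point as arithmetic; a rough sketch using only the TechPowerUp numbers quoted above, where perfect scaling would mean FPS rises one-for-one with clock:

```python
# Clock-scaling efficiency: how much of a clock increase shows up as FPS.
# Numbers are the TechPowerUp Fury X figures quoted above.

base_clock, oc_clock = 1050, 1215   # MHz, core
base_fps, oc_fps = 51, 56

clock_gain = oc_clock / base_clock - 1   # ~0.157 (15.7%)
fps_gain = oc_fps / base_fps - 1         # ~0.098 (9.8%)

print(f"clock +{clock_gain:.1%}, FPS +{fps_gain:.1%}")
print(f"scaling efficiency: {fps_gain / clock_gain:.0%}")  # ~62%
```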
 
koyoot, you are my Fox Mulder and...

It gives me hope about an awesome Mac Pro in the future.

Keep it up and don't let the naysayers dissuade you! :)
 
Which of these possible Mac Pro scenarios do we think are most likely?


Scenario #1 - Haswell

Tagline: "Fury Roadmap"

Availability: announced August-September 2015 along with new iMacs, shipping October-December 2015

Processors: Xeon Haswell-EP v3 configurable up to 18-core [e.g., Xeon E5-2699 v3, 2.3 GHz, 18-core]
Memory: 2133MHz DDR4 memory: configurable to 16GB, 32GB, 64GB, 128GB
Storage: PCIe-based 2GB/sec flash storage: configurable to 512GB, 1TB, 2TB
Graphics: Dual AMD Fury/nano tech in good/better/best adaptations
Interfaces: 6x Thunderbolt 2 (DisplayPort), 4x USB 3.1 (reversible plug), 2x legacy USB 3.0 (flat plug), HDMI
Other: no case change, "Mac Pro 6,2" designation, base price $3000 rapidly scaling up according to CTO options

Apple Retina 5K Display announced at same time, similar specs to Dell UP2715K Ultra HD 5K Monitor, compatible with existing and updated 27" iMacs, new retina iMac 21.5", existing Mac Pro (2013), new Mac Pro (2015), and high-spec'd MacBook Pros.


Scenario #2 - Broadwell

Tagline: "Can't update our specs, my ass!"

Availability: announced January-February 2016, shipping March-April 2016 (end Q1 when Intel ships Broadwell-EP)

Processors: Xeon Broadwell-EP v4 processor configurable to whatever relatively high GHz core counts are on offer
Memory: 2400MHz DDR4 memory: configurable to 16GB, 32GB, 64GB, 128GB
Storage: PCIe-based 2GB/sec flash storage: configurable to 512GB, 1TB, 2TB
Graphics: Dual AMD Fury/nano tech in good/better/best adaptations
Interfaces: 6x Thunderbolt 3 / USB 3.1 (reversible), 2x legacy USB 3.0 ports (flat), HDMI, DisplayPort->TB 3 adapter in box
Other: possible slight case size increase to accommodate larger internal heatsink, "Mac Pro 7,1" designation


Scenario #3 - Skylake

Tagline: "Good things come to those who wait ... a long, long time"

Availability: announced mid 2017 at WWDC, shipping in the next few months if Intel has ramped up chip production

Processors: Xeon Skylake-EP v5 processors configurable up to 28-core
Memory: 2400MHz DDR4 memory configurable to 16GB, 64GB, 128GB, 256GB
Storage: PCIe-based 2+ GB/sec flash storage: configurable to 1TB, 2TB, 4TB
Graphics: Dual AMD future chip tech in good/better/best adaptations
Interfaces: Thunderbolt 3 natch and whatever the industry still considers useful at that point


My money is on #2, considering they've already skipped over Haswell-EP and would only do #1 if it was AMD Fury tech they were waiting for. #3 is just too long to wait, even for Apple's traditionally tortuous Mac Pro upgrade cycle.
 
Brad, that made me chuckle :D.

I think #1 is the most probable at this stage. P.S. There is a rumor that Broadwell-EP CPUs are canceled. That's why I don't believe we will have to wait longer for the new computer.
 
Having the GPU radiating that much heat mere millimeters from that 5K panel is BEGGING for a yellow/brown cast over the GPU area in the future.

https://discussions.apple.com/thread/6641477

Apple has moved to a complete line of second-rate machines due to this addiction to AMD's Bargain Bin GPUs. They are cheaper for a good reason.

Good grief... after replacing the thermal compound with Gelid GC Extreme, my Titan X idles at 35˚C.
And these guys' 5K iMacs are idling at 60˚C. Mine peaks at 85˚C on the worst torture test Furmark has to offer, and theirs go over 90˚C when doing simple things like watching YouTube vids.

Fancy cases can't overcome the laws of physics.
 
Neither of these comments is really relevant, now that Nvidia has competitive OpenCL performance.
that's great and all but misses the point.. reverse it ->
how well does CUDA run on AMD?

then take that further.. why doesn't CUDA run on AMD?
if you answer that question then the point will be discovered ; )
 
So, all the games are broken? Amazing.

You have uncovered a worldwide conspiracy to make Fury appear to be a power slurping space heater. All the game manufacturers conspired years ago to make sure it would come out this way.

A similar thing happened a few years ago when Nvidia went from Fermi to Kepler in their high-end Tesla GPGPU products. The Kepler was much faster on paper, but had 3x-4x as many thread processors. Many applications as written could not make use of the increased parallelism, and those extra processors sat idle. The applications had to be either rewritten, or used at much larger problem sizes to make use of the available processors. I have no idea if this is a possible explanation for gaming performance on Fury, but I wouldn't be surprised.
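To illustrate (a toy model with invented numbers, not a claim about either vendor's parts): a kernel that only exposes a fixed amount of parallelism leaves a wider GPU partly idle until the problem size grows.

```python
# If a kernel only exposes N independent work items, a GPU with more
# processors than N leaves the surplus idle. Numbers are illustrative.

def utilization(work_items: int, processors: int) -> float:
    return min(1.0, work_items / processors)

work = 2_000                  # parallelism exposed by the app as written
narrow, wide = 1_500, 6_000   # e.g., an older part vs. a 3-4x wider one

print(f"narrow GPU: {utilization(work, narrow):.0%} busy")  # 100%
print(f"wide GPU:   {utilization(work, wide):.0%} busy")    # ~33%

# Scaling the problem up restores utilization on the wide part:
print(f"wide GPU, 4x problem: {utilization(4 * work, wide):.0%} busy")
```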
 
Many applications as written could not make use of the increased parallelism, and those extra processors sat idle. The applications had to be either rewritten, or used at much larger problem sizes to make use of the available processors.
veering some but i think they were just using easy(er)-to-hook-up methods in cuda.. (gpu assistance).. the gpus were being held back by the cpu.. or- the cpu needed to continually do some necessary things in order for the gpu to have data to process..
the gpu could calculate the data very quickly.. faster (depending on how you look at it) than the cpu could as well as take the load off the cpu..

so the cpu has more breathing room since calculations are happening elsewhere for it.. but the cpu would still bog down.. it couldn't feed the gpus data quickly enough for their resources to be fully utilized.. a mid grade gpu would perform the same as high end stuff since, under this implementation, you couldn't even push the mid card to 100%.. so really no use for a high end card at 400%
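A minimal sketch of that bottleneck; all rates are invented, and the point is only that pipeline throughput is capped by the slower stage:

```python
# Pipeline throughput is the minimum of the CPU feed rate and the GPU
# compute rate; past that point a faster GPU buys nothing. Made-up rates.

def throughput(cpu_feed: float, gpu_compute: float) -> float:
    return min(cpu_feed, gpu_compute)

cpu_feed = 100.0   # work units/s the CPU can prepare and hand off
mid_gpu = 150.0    # units/s a mid-range GPU can process
high_gpu = 600.0   # units/s a high-end GPU can process (4x mid)

print(throughput(cpu_feed, mid_gpu))   # 100.0
print(throughput(cpu_feed, high_gpu))  # 100.0 -- same, CPU-bound either way
```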

--
edit.
oh.. not trying to say anything against what you said.. i agree with you. just when i read your post, it threw me out on the above tangent.
 
... Look at the difference between the R9 290X and 390X. A 5% higher core clock on the 390X translates to 5 FPS. How much more core clock do you need to get that 5 FPS on the Fury X? 165 MHz, which is roughly 16% more, plus a 12% higher memory clock. That is because most of the cores are completely idle, or bottlenecked by the games.

Please keep in mind that GPUs, as well as CPUs, have thermal management which tries to keep the chip from overheating. Thermal management can be realized e.g. by reducing the frequency or by deactivating parts of the chip. It's no trivial feat to spot these effects in a benchmark. Basically you'd need a plot of the frequency, active core count, and FPS over time. That's something most benchmarks lack, because doing such a benchmark properly is a lot of work.

So if you post links to average benchmark numbers (whether for team red or team green), these will never tell the whole truth, and you will never be able to gain any meaningful insight from them.
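As a sketch of the kind of over-time analysis described above, assuming a hypothetical CSV log with time_s, core_mhz, and fps columns (not any real tool's format):

```python
import csv

# Flag intervals where the core clock dips below nominal, i.e., where
# thermal/power management may be hiding inside the average FPS.
# The log format (time_s, core_mhz, fps) is hypothetical.

NOMINAL_MHZ = 1050

def throttled_spans(path: str):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t = float(row["time_s"])
            mhz = float(row["core_mhz"])
            fps = float(row["fps"])
            if mhz < NOMINAL_MHZ * 0.97:  # more than 3% below nominal
                yield t, mhz, fps

for t, mhz, fps in throttled_spans("gpu_log.csv"):
    print(f"{t:7.1f}s  {mhz:6.0f} MHz  {fps:5.1f} fps")
```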
 
Tomvos, you forget that it is not only related to OC. Even a stock Fury X underperforms in gaming, even at 1080p, as I have shown. Is it due to the GPU? Simply, no. Because we have to look at a game where there is no CPU bottleneck and the GPU is fully utilized. Ryse, from the computerbase.de benchmarks, is a good example. Not only does it get a 20% boost in frame rate with proper drivers, it is now faster than the GTX 980 Ti at 1080p. That is simply because the GPU is fully utilized. It's also worth noting that somehow 3DMark does not have a problem with that.

http://www.techpowerup.com/forums/threads/overclocked-hbm-its-true-and-its-fast.213875/
I don't think the thermal constraints are a problem here.
 
Tomvos, you forget that it is not only related to OC. Even a stock Fury X underperforms in gaming, even at 1080p, as I have shown. Is it due to the GPU? Simply, no. Because we have to look at a game where there is no CPU bottleneck and the GPU is fully utilized. Ryse, from the computerbase.de benchmarks, is a good example. Not only does it get a 20% boost in frame rate with proper drivers, it is now faster than the GTX 980 Ti at 1080p. That is simply because the GPU is fully utilized. It's also worth noting that somehow 3DMark does not have a problem with that.

http://www.techpowerup.com/forums/threads/overclocked-hbm-its-true-and-its-fast.213875/
I don't think the thermal constraints are a problem here.


You (who have no Fury) know better than all of the review sites.

And the game coders all wrote bad code that made this low power consumption wonder look bad.

Who do you think shot Kennedy?
 
I will ask this again, MVC. Are you willing to give anything that contradicts the data I have gathered on this topic? Or will you only troll? If you have anything that contradicts it (any proof, any data), I will be happy to read it. So far you have not provided ANYTHING constructive to this thread on this topic.

P.S. It is not only my opinion, but the opinion of people who have a Fury X, which I linked in one of the posts in this thread.
http://semiaccurate.com/forums/showpost.php?p=242097&postcount=1106
 
Which of these possible Mac Pro scenarios do we think are most likely?

Scenario #2 - Broadwell

Tagline: "Can't update our specs, my ass!"

Availability: announced January-February 2016, shipping March-April 2016 (end Q1 when Intel ships Broadwell-EP)

Processors: Xeon Broadwell-EP v4 processor configurable to whatever relatively high GHz core counts are on offer
Memory: 2400MHz DDR4 memory: configurable to 16GB, 32GB, 64GB, 128GB
Storage: PCIe-based 2GB/sec flash storage: configurable to 512GB, 1TB, 2TB
Graphics: Dual AMD Fury/nano tech in good/better/best adaptations
Interfaces: 6x Thunderbolt 3 / USB 3.1 (reversible), 2x legacy USB 3.0 ports (flat), HDMI, DisplayPort->TB 3 adapter in box
Other: possible slight case size increase to accommodate larger internal heatsink, "Mac Pro 7,1" designation

My money is on #2, considering they've already skipped over Haswell-EP and would only do #1 if it was AMD Fury tech they were waiting for. #3 is just too long to wait, even for Apple's traditionally tortuous Mac Pro upgrade cycle.

I think you are right. If Thunderbolt 3 weren't in the equation at all, they'd definitely go for this. It's the Thunderbolt 3 and 5K monitors that make it a hard call, though. I would expect it to be a pain to do Alpine Ridge with Broadwell, and the PCIe lane bandwidth isn't there for Broadwell.
 
Also, I don't think that the current design of the nMP is here to accommodate:

because if it is so, it has failed most of them (GPUs, memory capacity). Maybe the next iteration under discussion will make it better.

Bandwidth for storage is one thing the Mac Pro has in abundance. You aren't supposed to stick disks in it; you're supposed to hook it up to a SAN via 10-gig Ethernet or Fibre Channel using those Thunderbolt ports, where you can have hundreds of spindles serving up hundreds of thousands of IOPS on petabytes of storage (off a NetApp, EMC, or whatever), and use the internal SSD as a scratch disk.

"oh noes i lost 3 drive bays!!" = totally irrelevant if you're serious about hooking up a lot of high speed storage. If you're relying on internal drive bays, you're small fry and literally should think outside the box.

Yes, it's a bit short on memory capacity, but the PCIe flash will help a little with that for swap.

GPUs? Order the appropriate spec when you buy; after 3 years, when the machine is out of warranty, sell it off and order a new one with new GPUs (and higher-bandwidth memory, higher-bandwidth Thunderbolt, new CPU socket/chipset, etc.)?
 
Which of these possible Mac Pro scenarios do we think are most likely?


My money is on #2, considering they've already skipped over Haswell-EP and would only do #1 if it was AMD Fury tech they were waiting for. #3 is just too long to wait, even for Apple's traditionally tortuous Mac Pro upgrade cycle.

Great post, and a relatively accurate summary of the potential options available. I would also lean towards option 2. The only other possibility I see would be option 1 with Thunderbolt 3, if Apple decided they want to release it this fall with a display that requires Thunderbolt 3. Of course the Skylake option is always possible, but I would really hope Apple doesn't let the Mac Pro go unupgraded for 3+ years...

In related news, the high-end consumer desktop Skylake processors came out today. Of course these are not directly applicable to the Mac Pro, as they are not the Xeon variants (Skylake-EP), but they provide a glimpse into the future. IPC (instructions per clock) gains are reasonable, but nothing earth-shattering. Basically, the summary from AnandTech is:

Ivy Bridge (Mac Pro 2013) -> Haswell: 11% improved
Haswell -> Broadwell: 3% improved
Broadwell -> Skylake: 2-3% improved
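Compounding those per-generation numbers gives a rough sense of the total on offer; a back-of-the-envelope sketch using the midpoints of the AnandTech figures above:

```python
# Compounded IPC gain from Ivy Bridge (Mac Pro 2013) to Skylake,
# using the per-generation figures quoted above (midpoint for 2-3%).

gains = {
    "Ivy Bridge -> Haswell": 0.11,
    "Haswell -> Broadwell": 0.03,
    "Broadwell -> Skylake": 0.025,
}

total = 1.0
for step, g in gains.items():
    total *= 1 + g
    print(f"{step}: +{g:.1%}")

print(f"cumulative: +{total - 1:.1%}")  # ~17% over three generations
```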

The improvements are relatively modest, and smaller than the generational gains of the past. The one thing missed in these numbers is that there are some nice improvements to media encoding and scientific computing, which is something Apple cares about.

The other important piece of the performance gains is tied to clock-speed increases between generations. Ivy Bridge-EP to Haswell-EP kept the same clock speeds but improved IPC. It remains to be seen whether Broadwell-EP will be clocked any higher than Haswell-EP. It is possible, as it moves to a smaller transistor node, which traditionally means faster clocks for a given TDP. However, the consumer Broadwell and Skylake parts at 14nm have seen clocks stay basically the same; comparing the Intel 4790K (Haswell) to the 6700K (Skylake), the maximum boost clock has decreased from 4.4 GHz to 4.2 GHz while TDP has increased from 88W to 91W.

The takeaway is that Intel's generational improvements seem to be slowing down for high-end desktop and server performance. The focus on mobile parts has brought increased efficiency, but at the cost of top-end performance. Most of the justification for buying Skylake on the consumer side seems to be platform improvements like DDR4, increased bandwidth, and support for better SSDs. Obviously these are things Apple cares about, but it is probably unwilling to wait for Skylake-EP's release in 2017. Otherwise, the CPU performance increases alone, from Haswell to Broadwell and Broadwell to Skylake, do not justify Apple waiting around for these chips from Intel. There is not much motivation for Apple even to wait for Broadwell-EP, especially since the platform is the same as what is currently available for Haswell-EP. For this reason, Apple is most likely basing the upgrade cycle around either Thunderbolt 3 or GPUs.

Maybe I am too much of an Apple homer, but this seems to justify the 1 CPU + 2 GPU approach Apple has taken. Single-threaded performance has stalled, and certainly Intel (and Apple) have known this was coming for a while. On the GPU side, however, the increase in performance is likely to continue as GPUs shrink from 28nm to 14/16nm next year. This is aided by the fact that GPU performance can increase simply by packing in more transistors, as we have seen with GPUs at 28nm. Now Apple just needs to catch up on the software side and start utilizing the GPUs more.

(Edit: Changed -E to -EP, thanks ManuelGomes)
 
Can I just ask something? Can we drop the Fury discussion? MVC is keen on bugging koyoot about whether or not Fury is a good card. Maybe it's not prone to being flashed with new firmware!
Leave it be for now; let's wait for news on driver optimization or better OS X support.
I'd say right now Apple is really waiting on TB3, surely not Haswell-EP. New GPUs are out; maybe this was the reason and they're just optimizing.
Broadwell-EP is a mystery, but I believe it will come. However, on the desktop it was not exactly a full lineup, so I don't really know.
Waiting for Skylake-EP doesn't seem reasonable, even for Apple and the MP. But that would bring TB3, though.

Stacc, the Xeons are -EP, not -E, and -E should come way before the -EP variants. You are correct, improvements are quite lean now, and I'd bet most of us wouldn't even notice them in everyday use.
What you can perceive is the SSD speed going up, which it did recently, and GPU tune-ups in the OS.
The only reason to be excited about new CPUs may be the higher core counts, for those who really require them, or just plainly want to have as many as they can, even if it's just to moan about the nMP not being dual-CPU :)

I will not go into the GPU discussion again: maybe Fiji, maybe Grenada. Apple would favor compute/DP, and that's Grenada, but it's older tech. Gaming would favor Fiji, which is newer, but gaming is not Apple's business. So, take your pick.

I'm also not sure Apple will go to 128GB as the memory max. Right now the nMP supports it, but Apple will not let you BTO it; you have to do it yourself. Maybe with DDR4...
Also, don't count on a 2TB SSD; we're not there yet.
 
I will come back one last time to the gaming performance of the Fury X, and to what compute means for gaming.
Read this post:
http://semiaccurate.com/forums/showpost.php?p=242123&postcount=1107

About that TB3: I think the problem is that it will consume a lot of PCIe lanes from the CPU. We have dual GPUs that use 16 lanes each, which makes 32, and the CPU has 40. And we have USB, the SSD, etc. I'm having a hard time imagining all of this connected to the CPU. The solution is to put the GPUs on PCIe x8. But is it really a solution? :/
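A rough lane budget illustrates the squeeze; the GPU lane counts are from the post above, while the SSD and Thunderbolt allocations are assumptions for the sake of the example:

```python
# PCIe lane budget against a 40-lane Haswell/Broadwell-EP CPU.
# GPU counts are from the post above; the other entries are assumptions.

CPU_LANES = 40

devices = {
    "GPU 1": 16,
    "GPU 2": 16,
    "PCIe SSD": 4,                       # assumed x4 flash
    "Thunderbolt 3 (Alpine Ridge)": 4,   # assumed x4 uplink
}

used = sum(devices.values())
print(f"used {used} of {CPU_LANES} lanes, {CPU_LANES - used} spare")  # 40 used, 0 spare

# Dropping the GPUs to x8 each frees 16 lanes:
devices["GPU 1"] = devices["GPU 2"] = 8
used = sum(devices.values())
print(f"with x8 GPUs: used {used}, {CPU_LANES - used} spare")  # 24 used, 16 spare
```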
 
Did you read that Tom's Hardware reduced the TDP by 50% and the power consumption went from 267W to 170W? Did you read that link at all? Did you see that at 170W it maintained 95% of nominal performance?

I'm going to give you the benefit of the doubt that you just don't understand the actual words. Once again you have linked to an article that DISPROVES what you are saying.

Here is what the article actually says:

"What would happen if the power limit (not to be confused with Nvidia’s power target) was lowered manually? Would it be possible to trade a small and expected performance hit for significantly lower power consumption akin to what we see at Full HD? Unfortunately, no."

They found EXACTLY THE OPPOSITE of what koyoot is claiming.

Here is the second half:

"Even though the power consumption decreases from 267W to 170W when the power limit is set to -50 percent, the resulting frame rates just aren’t in the playable range any more."

They never said it kept 95% of performance, they said it didn't work.

And that is my issue with koyoot and his posts. He keeps making wild claims with links. But when you follow the links, the articles REFUTE what he has claimed. He is either deliberately trying to mislead people or just doesn't understand the articles. Read it for yourselves, the article says the exact opposite of what he has been claiming it says.

http://www.tomshardware.com/reviews/amd-radeon-r9-fury-x-power-pump-efficiency,4215.html

I'm happy to walk away, I'm just tired of wild claims that aren't even remotely backed up by his links.
 
  • Like
Reactions: tuxon86
http://media.bestofmicro.com/R/X/509037/original/21-Power-Limit.png

And this is the exact chart they made for that fragment. The average FPS drops by 3 FPS. You did not show this chart directly in your post; why?
And this is exactly what has been discussed on the Anandtech forums, by people more competent than MVC and me combined.
http://forums.anandtech.com/showpost.php?p=37604750&postcount=77
http://forums.anandtech.com/showpost.php?p=37605456&postcount=84
http://forums.anandtech.com/showpost.php?p=37606797&postcount=87
http://forums.anandtech.com/showpost.php?p=37606878&postcount=90

The problem with your statements is that you take the words without considering the big scheme of things. You want the words to prove that AMD GPUs are ****, and that is the end of it. Which in reality is not even true. The minimum FPS is not even shown against any kind of timeline here; that would be much more meaningful. It could be a one-off situation. The average FPS dropped by 3 FPS, as we can see.

I don't have any hard backup here either. But the average FPS is the most important number here, because it did not come from nowhere.

Edit: I will not say anything more on this topic. Let's get back to the MP.
 
http://media.bestofmicro.com/R/X/509037/original/21-Power-Limit.png

And this is the exact chart they made for that fragment. The average FPS drops by 3 FPS. You did not show this chart directly in your post; why?

I didn't need to post every chart, because I quoted the conclusions reached by Tom's Hardware instead. They came to the conclusion that lowering clocks to save power DIDN'T WORK. That is why I concentrated on the words.

But if you think their conclusion was wrong based on your interpretation of that chart, I'll help out. I've attached the image. As they pointed out in the article, the games became "not PLAYABLE."

There are two lines on the chart: the average FPS and the minimum FPS. The min FPS went from 22 to 12, nearly a 50% drop.

And that is where "unplayable" comes from.

And this is exactly what has been discussed on the Anandtech forums, by people more competent than MVC and me combined.

So, since Tom's Hardware refuted your point and you got caught misrepresenting the conclusion of the test, now you want to claim some enthusiasts on a GPU forum are experts? I would be willing to bet a large sum of money that I have handled 1,000 times more GPUs than anyone else here, including the AMD fanbois you have been reduced to quoting.


The problem with your statements is that you take the words without considering the big scheme of things.

"The Words" were the conclusion of Tom's Hardware, from an article that you linked to. They proved you wrong, and you misrepresented the results. I am asking you politely to stop. I also think you owe everyone an apology. You claim that article came to the conclusion that Fury could be down clocked, keep it's performance and lower power hunger. The END RESULT that Tom's Hardware found was "No Unfortunately". Those aren't just words, that is their conclusion. Coming on here and pretending that they came to the opposite conclusion is an attempt to deceive people. If you stop making false claims and wildly inaccurate misrepresentations of tech articles, I promise to stop correcting you.
 

[Attachment: Screen Shot 2015-08-05 at 5.39.47 PM.png (the power-limit chart)]
I am probably making a mistake getting involved in this argument...

We can all agree that reducing the power consumption of a video card reduces its performance. There is no magic wand that Apple can wave to get increased performance at reduced power consumption compared to retail cards. This applies to both Nvidia's and AMD's top-end cards. The interesting point made in the Tom's Hardware plot is that average frame rates are relatively unaffected by decreases in power, but the minimum frame rates are significantly impacted. Looking at both plots, there is a point of diminishing returns, and that's exactly where AMD has decided to sell the card at stock clocks. Both of these metrics are valid ways of measuring a GPU's performance, and most good reviewers include both. "Playable" is arbitrary, though, and many gamers would argue that a 40 FPS average frame rate is not playable.
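To make the average-versus-minimum distinction concrete, here is a small sketch computing both from per-frame render times; the frame times are invented for illustration:

```python
# Average vs. minimum FPS from per-frame render times (ms).
# A few slow frames barely move the average but crater the minimum.
# Frame times below are invented for illustration.

frame_ms = [25.0] * 57 + [80.0] * 3  # mostly 40 fps, three 12.5 fps stutters

avg_fps = 1000 * len(frame_ms) / sum(frame_ms)
min_fps = 1000 / max(frame_ms)

print(f"average: {avg_fps:.1f} fps")  # ~36 fps
print(f"minimum: {min_fps:.1f} fps")  # 12.5 fps
```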

Unfortunately, Apple does not sell a machine with a video card that exceeds ~150W. Whichever vendor they choose, clocks will have to be reduced. It looks like they are favoring AMD, most likely for performance in OS X, OpenGL, and OpenCL, and perhaps other reasons like a willingness to make custom form factors. They are certainly not choosing the video card based on DirectX Windows benchmarks. You can argue until you are blue in the face about how good it is at Windows gaming, but that is simply not a priority for Apple.

I recommend taking a look at Barefeats, as they have some OS X-relevant benchmarks. A good point of comparison is how retail cards like the 7970/R9 280X stack up against the D700. This shows how a retail card with faster clocks compares to the reduced clocks found in the Mac Pro. I am sure that deep in some lab at Apple they are testing Maxwell, Hawaii, and Fiji at reduced frequencies to figure out which to stuff in the Mac Pro. If we are lucky, they have already figured it out and it's being mass-produced.
 