The only thing I would like to add to all of this:

The ASUS Strix Fury is rated at a 216 W TDP: a cut-down, high-voltage, high-clock GPU.
The Fury Nano is rumored to be full Fiji with 4096 GCN cores and a 175 W TDP.
The downclocked, undervolted Fury X from that thread appears to run at 1020 MHz, at 225 W max power draw.

I still say that Apple will simply go for an 850 MHz core clock even if they could make it go higher. 900-925 MHz would be possible in a 125 W power envelope.
 
The reason that limiting the power supply from CCC (or whatever app does it) causes only small drops in performance is the scale of the GPU. Fiji has 4096 cores; it is extremely wide. Every core needs its own share of power and runs at the core clock. Increasing the clock rate therefore increases power consumption significantly, while letting it run slower reduces power consumption a lot without hurting overall performance to the same degree.
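The scaling argument above can be sketched with the standard first-order dynamic-power relation, P proportional to f times V squared. The specific clock and undervolt figures below are illustrative assumptions, not measured Fiji numbers:

```python
def relative_power(f_ratio: float, v_ratio: float) -> float:
    """Dynamic power relative to stock: P = C * V^2 * f, so
    P_new / P_stock = (f_new / f_stock) * (V_new / V_stock) ** 2."""
    return f_ratio * v_ratio ** 2

# Illustrative: dropping from 1050 MHz to 850 MHz with a 10% undervolt
# cuts dynamic power by roughly a third, while peak throughput
# (which scales only with clock) drops by about 19%.
f_ratio = 850 / 1050
print(f"power: {relative_power(f_ratio, 0.90):.2f}x, throughput: {f_ratio:.2f}x")
```

Because voltage enters squared, downclocking plus undervolting saves power super-linearly, which is why a wide, slow GPU can keep most of its throughput at a fraction of the power.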

The exact opposite is true for Maxwell GPUs. TomsHardware's review of the GTX 980 and 970 showed that in GPGPU workloads, non-reference cards drew only 20 W less than an R9 290X while still being less powerful than the R9 290X. To reach its rated 4.6 TFLOPS of compute power, the GTX 980 needed to sustain its highest turbo clock constantly, which is what pushed it to 277 W of power draw. With a 165 W cap on the clocks, you will never reach that amount of compute power.

That's because Maxwell GPUs are clocked very high but are relatively "thin", with big internal caches. That makes them really efficient in games but really inefficient in GPGPU. GCN cores, as we can see, can be squeezed into 125 W and still deliver 3.5 TFLOPS of compute power, whereas a GTX 980 squeezed into 125 W will be much less powerful. With a BIOS clock cap its power consumption will not exceed the 165 W limit, or only slightly, but it will never reach the Maxwell GPU's potential either. That's why Green500 says GCN is the most power-efficient architecture that has EVER been on this planet. Power efficiency is not about how little power your GPU uses, but about how much performance you get from each watt consumed. A 125 W GPU from AMD gets 3.5 TFLOPS of compute power; a 120 W GPU from Nvidia gets 2.3 TFLOPS. Which one is more power efficient?
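That definition of efficiency is just throughput divided by power. A quick sanity check with the figures quoted above, treating the TDP numbers as actual draw (a simplification):

```python
def gflops_per_watt(tflops: float, watts: float) -> float:
    """Compute throughput per watt in GFLOPS/W."""
    return tflops * 1000.0 / watts

amd = gflops_per_watt(3.5, 125)   # 125 W AMD part
nv = gflops_per_watt(2.3, 120)    # 120 W Nvidia part
print(f"AMD: {amd:.1f} GFLOPS/W, Nvidia: {nv:.1f} GFLOPS/W")
# AMD: 28.0 GFLOPS/W, Nvidia: 19.2 GFLOPS/W
```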

MVC, I'm starting to think that you have problems with reading and understanding what you read. How can anyone think that saying a GPU gets much better power efficiency from downclocking and undervolting the core while overclocking the memory means the GPU overclocks great as a whole? Only you believe that. MVC, get real. People have proven that AMD GPUs get big efficiency boosts from downclocking and undervolting, and it was already shown in this thread. Hawaii could run at a 145 W TDP with an 850 MHz core, and Grenada is even more power efficient than Hawaii (check the power consumption of the HiS IceQ model, the closest thing to a "reference" 390: it runs a 1000 MHz core clock on the 390, 5% higher than the R9 290, with 384 GB/s of bandwidth, 64 GB/s more than the R9 290, and double the memory, while using 5 W less power than the R9 290).

Every AMD GPU is suitable for the Mac Pro, no matter how you argue against that.

Hey, you wanted to show that Fury-X wasn't "Oc'ed to Hell"

Wasn't that your point in putting links to an article that proved it was?

Why all this typing? Just admit you were wrong and walk away.

Will Apple find a way to shoehorn a Fury into the nMP? Possibly. But so far you haven't proven anything other than what I started with: the Fury-X is a maxed-out Fiji. Anything that comes in the future will be slower, because it can't go any faster.

It was a simple statement I made; somehow you wanted to disprove it but couldn't. Thanks, I guess.

And STacc, yes, you also proved it. The 7970 had to be throttled down to fit in 125 watts. If Fury uses MORE power than the 7970, it will need to be throttled EVEN MORE to fit in the nMP. Not sure what other conclusions can be reached. Apple bet on the wrong horse.
 
MVC, forgive me for asking this, but how hard can it be to understand that the core may be clocked high, and still be overclockable, but that the bigger gains come from OVERCLOCKING THE MEMORY?! How hard is it for you to understand that this is what I was showing you all along? The biggest gain in performance comes from higher memory bandwidth. That is exactly what I was talking about the whole time.

A 900 MHz, 640 GB/s FirePro card in a Mac Pro would be faster than a stock Fury X in apps that benefit from memory overclocking, while using less than half the power needed to run a stock Fury X. You would lose a bit of compute power (7.4 TFLOPS for a 4096-core GCN GPU at 900 MHz).
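The quoted 7.4 TFLOPS follows from the usual FP32 peak formula: shader count times 2 FLOPs per clock (one fused multiply-add) times core clock. A minimal check:

```python
def peak_fp32_tflops(shaders: int, clock_mhz: float) -> float:
    """Peak FP32 throughput = shaders * 2 FLOPs/cycle (FMA) * clock."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

print(peak_fp32_tflops(4096, 900))   # full Fiji at 900 MHz -> ~7.37 TFLOPS
print(peak_fp32_tflops(4096, 1050))  # stock Fury X -> ~8.6 TFLOPS
```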
 
Relevant to the discussion is this forum post here, where someone tries to limit the power consumption of a Fury X to potential Fury Nano limits and sees very little decrease in performance. The catch here is I don't see any great measures of power consumption, so who knows if limiting power through the graphics drivers is actually doing anything.
It turns out that at 225W you get 1035 MHz max core clock. That is extremely impressive.

And again: http://www.overclock.net/t/1565417/dglee-new-r9-nano-pictures-and-unigine-heaven-results-from-amd
 
Turns out that MVC may not have been right in saying that the Fury X is OC'ed to hell.

That's what you said.

Then you posted an article where the author summed up with "Looking at the numbers, I'm not sure if a 150W power draw increase for a mere 3 FPS increase is worth it for most gamers."

So your linked article in fact proved you wrong.

And at 225 Watts, you still need to lose 100 Watts PER CARD to fit in nMP. That isn't going to come with no pain. Apple can't wave a magic wand and make it go away. For 2 cards, you need to lose 200 Watts, or 50% of nMP total power. Where is it going to go? Lower the clocks some more maybe?

You are still trying to fit 5 gallons in a 2 gallon bucket. No currently available form of Fiji even comes CLOSE to fitting in nMP power & heat envelope. Not even CLOSE. Maybe in 2 years when they refine the fab and can bin the bejesus out of them. And then it will be like the D700, 2 years old on launch day.

Or maybe they will ship it with nMP, 2 @ radiator & fan combos and an additional power supply? That sounds just like Apple.

A 900 MHz, 640 GB/s FirePro card in a Mac Pro would be faster than a stock Fury X in apps that benefit from memory overclocking, while using less than half the power needed to run a stock Fury X.

Yes, it "would be", but if you read that lovely article that you yourself linked, the Fury-X, with unlimited power and water cooling, could only hit 560 MHz on the RAM. So a little math correction is needed: the new bandwidth should be 573 GB/s. Maybe it's time for new batteries in your calculator?
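That correction matches the first-generation HBM bandwidth formula: a 4096-bit bus transferring twice per memory clock. A quick check:

```python
def hbm_bandwidth_gbs(mem_clock_mhz: float, bus_bits: int = 4096) -> float:
    """Bandwidth = memory clock * 2 transfers/clock (DDR) * bus width,
    converted from megabits/s to GB/s."""
    return mem_clock_mhz * 2 * bus_bits / 8 / 1000

print(hbm_bandwidth_gbs(500))  # stock Fury X: 512.0 GB/s
print(hbm_bandwidth_gbs(560))  # 560 MHz overclock: 573.44 GB/s
```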
 
I find it interesting that two of you reference massive cylindrical computers for your argument. We are talking about a much smaller desktop computer here, bound by economy of scale in components by its very size. I too would love to see "bent" silicon, but you can only build an efficient design with the components available to you, and they are not round or cylindrical just yet (here's hoping). Apple made a design choice, not an efficient choice, which is obvious from its lack of upgradeability. The same pattern is littered throughout all their products: looks great, but you have to buy new or pay Apple exorbitant prices to fix them when they break.
 
Hmm, another factual error.

https://www.asus.com/Graphics-Cards/STRIXR9FURYDC34GGAMING/specifications/

Asus says "Power Consumption Up to 375 W"

Maybe you should get in touch and let them know they have it wrong?

Or admit that Fury and nMP aren't a match made in heaven, no matter how low you crank the clocks.
http://www.anandtech.com/show/9421/the-amd-radeon-r9-fury-review-feat-sapphire-asus/3
Yes, they have it wrong, because they quote the maximum available power calculated from the molex and PCIe connectors. If I believed everything written on every company's official slides, then I should also believe that the Fury X has 450 GB/s of bandwidth:
http://i.imgur.com/e9Rltsn.jpg Companies sometimes genuinely don't know what they put in their own specs.

It's also worth noting that TechPowerUp's numbers for the Asus Strix show the GPU really has a 216 W TDP.
Yes, it "would be", but if you read that lovely article that you yourself linked, the Fury-X, with unlimited power and water cooling, could only hit 560 MHz on the RAM. So a little math correction is needed: the new bandwidth should be 573 GB/s. Maybe it's time for new batteries in your calculator?
I'm sorry, but I'm not responsible for what you do or don't understand. It also looks like you have never heard of the silicon lottery, and you forget that I already posted a link in this thread to a site where they overclocked Fury X memory to 600 MHz. One HBM stack can be overclocked to 600 MHz and another only to 560. We already know that the best-binned chips go to Apple (the full Tonga in the iMac: Apple bought AMD's entire available supply of full Tonga). The best-binned chips are also the ones that achieve the highest efficiency, meaning they can be undervolted and still stay within a power envelope. That is exactly the case with the FirePro D700.

Again, you fail to see that TechPowerUp, for all their effort, made a big mistake: they tested the overclock on only one game, and one that barely responds to a core overclock at all. It benefits much more from higher bandwidth than from a core OC, which is fair enough.


I will say this again: AMD is able to get the FULL Fiji core down to a 175 W max TDP with typical power draw around 157 W, which I think you, MVC, completely forgot. Or ignored, just because you can. By the numbers we know, it looks like the Fury Nano will run at 950-975 MHz. The Fury X from the thread linked above goes from a 375 W TDP with 290 W nominal power draw down to 225 W max power draw with only a 15 MHz lower core clock.

How long can this argument go on? I'm ending it now. I will not respond to any more posts on this topic from you, MVC, regardless of what you say.

P.S. A dual Fiji setup running at 850 MHz will get 14 TFLOPS of performance. Over twice that of a Titan X in the same power envelope.
 
If only Apple could harness the power erupted by knowledgeable engineers arguing... It would exceed the energy of all the islands in the Pacific!

Seriously, it's been very informative and I genuinely feel I've been learning occasional nuggets from the highly technical world of GPU design. So while you might have to agree to disagree, I'd like to thank you for the insights.

I'm just worried that achieving what everyone can agree on, that Apple and AMD are working to get Fury tech into a future Mac Pro, will take longer than this thread's title suggests.

How long do you think it would take to go from just released silicon to housing it on one of Apple custom Mac Pro video boards?
 
How long do you think it would take to go from just released silicon to housing it on one of Apple custom Mac Pro video boards?
Designing the board is probably not what would hold it up; Apple could have had one ready at release. It sounds like there are significant supply constraints on the GPU or the memory, which have limited its availability. But supply alone certainly does not mean that Apple will release a revised Mac Pro; Apple may be waiting on other components, such as a new processor from Intel.
 
TSVs (through-silicon vias) were what was in limited supply for HBM.

Also, there is a rumor that AMD was working on an 8 GB version of HBM. The result is not known, however. Gamescom is probably the earliest we will hear about the Fury Nano and AMD's future plans for Fiji GPUs.
 
Apple made a design choice, not an efficient choice
they sure went through a helluva lotta trouble just to make it round and inefficient ;)


though some would argue (including the actual designers/engineers), the whole idea is based around efficiency.. power/heat dispersion/wattage/space.. the cylindrical shape is a byproduct of that efficiency..

as in, it's not likely they started with the shell design then figured out a way to fit components into it.. more likely the idea of a tube aesthetic worked in conjunction with the physics of heat dispersion/air movement.. or the aesthetics and functionality were both being considered along the way and one didn't trump the other.. not form over function.. rather- form and function..
 
Also, there is a rumor that AMD was working on an 8 GB version of HBM. The result is not known, however. Gamescom is probably the earliest we will hear about the Fury Nano and AMD's future plans for Fiji GPUs.
This isn't possible with first-generation HBM. Looking at pictures of Fiji, it may be possible to fit another stack on both sides of the die, giving 6 GB of HBM. My guess is this is something that cannot be changed without modifying the GPU itself, which won't happen until the next generation of HBM, which significantly increases the maximum capacity.

The rumor most likely stemmed from the confusion of whether Fiji or Hawaii would be in the 390X. Chances are someone saw some specs that showed 8 GB in the 390X, which turned out to be Hawaii with GDDR5 instead of Fiji.
 
http://www.anandtech.com/show/9421/the-amd-radeon-r9-fury-review-feat-sapphire-asus/3
Yes, they have it wrong, because they quote the maximum available power calculated from the molex and PCIe connectors.
I will say this again: AMD is able to get the FULL Fiji core down to a 175 W max TDP with typical power draw around 157 W, which I think you, MVC, completely forgot. Or ignored, just because you can. By the numbers we know, it looks like the Fury Nano will run at 950-975 MHz. The Fury X from the thread linked above goes from a 375 W TDP with 290 W nominal power draw down to 225 W max power draw with only a 15 MHz lower core clock.


P.S. A dual Fiji setup running at 850 MHz will get 14 TFLOPS of performance. Over twice that of a Titan X in the same power envelope.



So now the international conspiracy to make Fury look like a power hog INCLUDES Asus, one of the manufacturers?
But you, a consumer sitting at home who has never laid hands on one, know better than the manufacturer?
That's not hubris at all, is it?

And I hate to break it to you, but 225 Watts is still 100 Watts too much PER CARD, or 200 Watts total beyond nMP PSU.
So the Fat Lady is going to have to eat a few more salads before she can sing from inside the nMP.
And am I the only one to see the absurd folly of all this discussion?

We are debating how much they will have to strangle the Fury to make it fit in an expensive workstation. Meanwhile, the cMP can already have cards in it that beat the Fury-X, with no artificial limits placed on it by lack of power infrastructure.

So, maybe, good golly gosh, if we are super duper lucky, someone will figure out a way to only mildly strangle the 3rd fastest card on the market so that it can function on 125 watts. And if all goes well, we'll only have to wait 6 more months to find out how much slower it is than the cards you can already put in the cMP. Isn't this insanity?

(Sing along with the Beach Boys)

Wouldn't it be nice if Fury used less power,

So it could run in the new Mac Pro.

And wouldn't it be nice if it wasn't so hot,

So it wouldn't fry the CPU.

Imagine if it were a real computer,

Then we wouldn't have to make excuses.

Oh wouldn't it be nice?
 

no worries, wasn't going to happen

but it really is silly

the guys on the HP forums don't sit and hope and pray and cross their fingers that AMD will make them a neutered, de-clocked, de-volted version of their top GPU that is 3/4 as good as the cards everyone else gets from Newegg, just so it fits the arbitrary whims of what HP thinks a GPU "should" use for power and how much heat it is "allowed" to give off

that's not how real computers deal with GPUs
 
no worries, wasn't going to happen

but it really is silly

the guys on the HP forums don't sit and hope and pray and cross their fingers that AMD will make them a neutered, de-clocked, de-volted version of their top GPU that is 3/4 as good as the cards everyone else gets from Newegg, just so it fits the arbitrary whims of what HP thinks a GPU "should" use for power and how much heat it is "allowed" to give off

that's not how real computers deal with GPUs

Hasn't Apple underclocked most of its GPUs in past years? The rMBP, iMac, and now the nMP have joined the club... and the trend will go on. The iMac is a desktop but comes with a mobile GPU. TDPs count more than FPS.

I would like to see a normal desktop too: a custom nMP with one GPU (up to 275 W), a normal desktop CPU, and the usual suspects. Sadly, it would be too standard a machine to justify the Apple tax, so it's not gonna happen.
 
Hey MVC, for Apple the plain standard box has gone the way of the Xserve and the dodo; there's no point hurting ourselves wishing for a standard video card in the near future. We're just trying to console ourselves by examining what tweaks to existing parts would be necessary to make them fit in a nMP, no matter how much less powerful they will be compared to a similar GPU in a classic Mac Pro or a standard PC. Of course it will be less powerful.

Personally, I think the Furys are unlikely because I read that they lack the FP64 units that would justify putting them in a nMP, but I might have misread that.

What is most annoying, though, is the lack of CrossFire under OS X. It works under Windows on the same hardware, so why not put it in OS X? There are now standard machines coming off the factory lines with two GPUs. I don't know whether Apple or AMD is to blame for that one, though. And maybe the market is really too tiny for it to be worth it.
 
And I hate to break it to you, but 225 Watts is still 100 Watts too much PER CARD, or 200 Watts total beyond nMP PSU.
So the Fat Lady is going to have to eat a few more salads before she can sing from inside the nMP.
And am I the only one to see the absurd folly of all this discussion?

We are debating how much they will have to strangle the Fury to make it fit in an expensive workstation. Meanwhile, the cMP can already have cards in it that beat the Fury-X, with no artificial limits placed on it by lack of power infrastructure.
http://s28.postimg.org/a8bdgk7pp/Capture.png
http://s30.postimg.org/431cr5ej5/Capture.png
http://s28.postimg.org/4vr4g6971/Capture.png
Beats the Fury? In face detection? Or gaming? I thought that is not why people buy a cMP.
http://scr3.golem.de/screenshots/15...uxmark-v3-(complex-scene,-gpu-only)-chart.png
http://scr3.golem.de/screenshots/15...uxmark-v3,-complex-scene,-gpu-only)-chart.png
Looks like your Titan X cards are slower and use more power when rendering in OpenCL.

Oh, and you know what? AMD finally cracked the OS X drivers for El Capitan, and by the looks of it, people are already seeing the results in the El Cap beta thread.

We know that you want to sell more GPUs, but you had better find a better way of advertising them in this thread.

One more thing. Maybe full Fiji will be downclocked in the Mac Pro. But there will be TWO GPUs with a total power limit of 250 W, which means that at 850 MHz, 8192 GCN cores will deliver 14 TFLOPS of compute power. Over twice the power of an MVC-badged Titan X in the same power envelope. Now look at the numbers from the reviews in this post and work out the efficiency.

That's what makes the difference.

Last post from me to MVC in this thread, on this topic.
 
I wonder if MVC really needs the power of a full-fledged GPU in the nMP, or if these posts are only for the sake of argument.
I see no added value in all this fighting against AMD GPUs because they're downclocked. It's Apple's design, and to me it seems to work just fine, so why keep cracking at it?
If this is not your beach, why bother replying at all?
You would make better use of your time on the HP forums, and let those who really care about the nMP exchange ideas and thoughts about what could be coming, or what we wish might be coming.
I usually don't get into this kind of argument, but it seems a little much right now, and we should end this topic ASAP.
We all know there won't be regular PCIe cards in the nMP, and that the design probably won't change soon.
So the GPUs will always be custom made, with the power restraints we know.
Will it be Fiji-based or Grenada? I guess we'll have to wait a little longer to find out.
But please be done with the "Nvidia is great and AMD sucks" kind of comments. I'm no AMD fanboy; much to the contrary, I haven't had an AMD GPU in any of my PCs, only Nvidia, and I build them all. But I don't go around sinking AMD, nor Nvidia for that matter.
Live with it, and let others appreciate the machine that fits their needs.
There are other manufacturers out there that will happily sell you a powerful workstation.
Maybe you could even go Hackintosh; you obviously know your way around the hardware and firmware, so make yourself a custom Pro machine. Take a cMP case, fit in a powerful PC motherboard, an i7 or Xeon CPU, quad Titan X, some SSDs, and there you go: a dream machine.
With that setup, you can't complain...
 