Volta matches or beats Vega 56 on performance/watt....

Is it available in consumer space? Will consumer GPUs have the same tech?
Chill only works with DX titles, and obviously these advanced power-saving features don't exist in our boring macOS drivers. We get a bare-bones AMD driver, just like the Nvidia web driver: no regular optimisations, no HEVC decode on the GPU, etc.
Yep. Unfortunately.
Not cool.
Pun intended? ;)
 
Goalposts in motion.
Obviously.

You were the one who shifted the goalpost from performance/watt/price to performance/watt. So who is shifting goalposts? That is why I ask: will your Volta GPUs use the same tech in consumer space?

P.S. So you basically end up with similar performance per watt buying an Nvidia Volta GV100 GPU, but you have to pay how many times more for it? 45 times more?
 
Obviously.

You were the one who shifted the goalpost from performance/watt/price to performance/watt. So who is shifting goalposts? That is why I ask: will your Volta GPUs use the same tech in consumer space?

P.S. So you basically end up with similar performance per watt buying an Nvidia Volta GV100 GPU, but you have to pay how many times more for it? 45 times more?
Please define "performance per watt per price", as distinct from "performance per watt" and "performance per unit of currency".
 
Volta matches or beats Vega 56 on FP16 performance/watt....


That actually seems pretty good, especially given how huge V100 is. I'll be interested to see what the real efficiency numbers look like once reviews are out. This could get even better, since Vega 56 is the cut-down part whereas Vega Nano would be the full chip at a better point on the power-efficiency curve.

Compute-wise, I think Vega will be fairly competitive, although I haven't seen any machine-learning benchmarks, which is something AMD has been targeting this card at. I think the biggest question is its efficiency on graphics tasks, where Nvidia has been absolutely dominant since Maxwell.
 
Please define "performance per watt per price", as distinct from "performance per watt" and "performance per unit of currency".
Similarly to why the Ryzen 1700 is actually the best CPU product we have had in the last 5 years.

It's always about how much the product gives you for each watt you have to feed it, and how little money you have to pay for it.
 
For compute customers, if AMD can offer 75% of the performance of V100 for 10% of the price, that is a winning strategy.
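As a quick back-of-the-envelope check (Python), here is the performance-per-dollar implied by that hypothetical; the 75% and 10% figures are the poster's illustration, not measured numbers:

# Hypothetical figures from the post above, not measured data.
relative_performance = 0.75   # 75% of V100's performance
relative_price = 0.10         # 10% of V100's price

# Implied advantage in performance per dollar:
print(relative_performance / relative_price)   # -> 7.5x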
 
For compute customers, if AMD can offer 75% of the performance of V100 for 10% of the price, that is a winning strategy.
Which is based on "performance/price", not "performance/watt/price".

Anyway, since "performance/watt/price" isn't well-defined, I parsed the earlier post as "'performance/watt' and 'performance/price'".

It also depends a lot on how important FP16 is to your workflow - it's not generally useful.
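For what it's worth, one literal reading of the three metrics being argued about can be sketched in a few lines of Python; the distinction is purely dimensional and no real product numbers are assumed:

def perf_per_watt(tflops, watts):
    return tflops / watts              # TFLOPS per watt

def perf_per_dollar(tflops, dollars):
    return tflops / dollars            # TFLOPS per dollar

def perf_per_watt_per_dollar(tflops, watts, dollars):
    # One way to read "performance/watt/price": performance divided by
    # both power draw and price, i.e. TFLOPS per (watt * dollar).
    return tflops / (watts * dollars)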
 
Which is based on "performance/price", not "performance/watt/price".

Anyway, since "performance/watt/price" isn't well-defined, I parsed the earlier post as "'performance/watt' and 'performance/price'".

It also depends a lot on how important FP16 is to your workflow - it's not generally useful.

It is extremely useful for deep learning, but then Volta has the tensor cores for 120 TFLOPS, which kind of invalidates the previous comparison.
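As a rough sanity check of that 120 TFLOPS tensor figure, it falls out of V100's published specs: 640 tensor cores, each performing a 4x4x4 half-precision matrix FMA per clock, at roughly a 1.45 GHz boost clock (the exact clock is an assumption here and varies by SKU):

# Rough sketch of where the ~120 TFLOPS tensor number comes from.
tensor_cores = 640
flops_per_core_per_clock = 4 * 4 * 4 * 2   # 64 multiply-adds = 128 FLOPs per clock
boost_clock_hz = 1.455e9                   # approximate; varies by SKU

print(tensor_cores * flops_per_core_per_clock * boost_clock_hz / 1e12)   # ~119 TFLOPS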
 
It also depends a lot on how important FP16 is to your workflow - it's not generally useful.

Sure, though in this case it directly scales with FP32 performance. FP64 performance, however, is dramatically better on V100; V100 spends much of its silicon budget on tensor cores and FP64 compute.
 
Similarly to why the Ryzen 1700 is actually the best CPU product we have had in the last 5 years.

It's always about how much the product gives you for each watt you have to feed it, and how little money you have to pay for it.
The 1600 is better value, and it's quite powerful.
 
Has anyone heard any rumors or seen any information about how Vega, specifically the 56 model (the only valid model for Mac Pros), handles VR? The GTX models are optimized up the yin-yang for VR. I'm hoping AMD decided to do the same with these...

If so, I'm leaning towards the RX Vega 56 for my next GPU.
 
Has anyone heard any rumors or seen any information about how Vega, specifically the 56 model (the only valid model for Mac Pros), handles VR? The GTX models are optimized up the yin-yang for VR. I'm hoping AMD decided to do the same with these...

If so, I'm leaning towards the RX Vega 56 for my next GPU.

In Windows, fine obviously.

VR on a Mac. Well, the minimum spec will be higher than on the PC because drivers are slower. Unless VR titles use Metal and have lower frame rates/quality than the PC versions.
 
Yeah, I'm specifically talking about Windows. GTX cards have the ability to render both eyes in one swoop (the vast majority of the software does not support this yet), as well as other VR enhancements. I'm just wondering if AMD will carry over these same VR enhancements.
 
It is extremely useful for deep learning, but then Volta has the tensor cores for 120 TFLOPS, which kind of invalidates the previous comparison.
I'd be surprised if deep learning or TensorFlow matters to much of the readership here. As I said, FP16 is not "generally useful" - it's very important for some specialized tasks, but not generally useful.

I don't know why the earlier poster said anything about FP16 and the confused metric of "performance per watt per dollar", unless she did it to bump her thread back to the top of the first page.
Just wait until it becomes "performance per watt per price per clock", which is what it'll come to next.
:D

That would be "turbo goalpost in motion"....
 
I'd be surprised if deep learning or TensorFlow matters to much of the readership here. As I said, FP16 is not "generally useful" - it's very important for some specialized tasks, but not generally useful.

I don't know why the earlier poster said anything about FP16 and the confused metric of "performance per watt per dollar", unless she did it to bump her thread back to the top of the first page.
:D

That would be "turbo goalpost in motion"....
I know it's hard for Nvidia users to understand that you can pay less for the hardware you use for machine learning, but it appears that in the upcoming months a "boom" will happen in this market because of affordable hardware. Show me a GPU from previous years that had 21 TFLOPS of FP16 and cost $400.

DX12 and Vulkan games will use a lot of FP16 as well. It will start becoming the most important metric for those games and for VR applications using Vulkan and DX12 (actually Vulkan, because it can be used everywhere).

Maybe you are not aware of how important FP16 is rapidly becoming? Nvidia was marketing this just one year ago. Now the goalpost has moved, because Nvidia said so? I thought you were a professional tied to machine learning, so you should know all this.

That is why I posted: the RX Vega 56, with 21 TFLOPS of FP16 and a 210 W TDP, will have the best performance/watt/price ratio in the upcoming months for the FP16 market (rough numbers are sketched below).

P.S. How big would the total cost be of buying the specified Tensor SoC together with a $400 GPU that has 21 TFLOPS of FP16, and what performance could you get out of that combo? ;)

Will it be cheaper than buying a GV100, for the same end result? ;)
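For context, here is a rough worked comparison of those claims in Python. The Vega 56 figures (21 TFLOPS FP16, 210 W, $400) are the poster's; the V100 figures (~30 TFLOPS non-tensor FP16, 300 W) and the "45x" price multiplier from the earlier post are assumptions, so treat this as a sketch rather than a benchmark:

# Vega 56: ~21 TFLOPS FP16 follows from 3584 shaders * 2 FLOPs per FMA
# * 2 for packed FP16 * ~1.47 GHz boost ≈ 21e12.
vega56 = {"fp16_tflops": 21.0, "tdp_w": 210, "price_usd": 400}
# V100 figures are assumptions: ~30 TFLOPS non-tensor FP16, 300 W, "45x" the price.
v100 = {"fp16_tflops": 30.0, "tdp_w": 300, "price_usd": 400 * 45}

for name, gpu in (("Vega 56", vega56), ("V100", v100)):
    ppw = gpu["fp16_tflops"] / gpu["tdp_w"]        # TFLOPS per watt
    ppd = gpu["fp16_tflops"] / gpu["price_usd"]    # TFLOPS per dollar
    print(name, round(ppw, 3), "TFLOPS/W,", round(ppd, 4), "TFLOPS/$")

# Under these assumptions the two land at similar TFLOPS/W, but Vega 56 comes out
# far ahead on TFLOPS per dollar.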
 
DX12 and Vulkan games will use a lot of FP16 as well.

Source? The only thing I've seen about this recently is on an AMD marketing slide, which is clearly heavily biased (i.e. AMD wants game developers to start using more FP16 so they have an advantage when compared with consumer Pascal cards).
 
specifically the 56 model (the only valid model for Mac Pros)

RX Vega Nano also fits within the power requirements (more easily in fact), although that has a later release date.

If you're dead-set on Vega, I'd be patient and wait to see this card in action before spending any money.
 