Or, like I've mentioned many times in the past, these (and plenty of other) tests are not solely limited by raw compute horsepower (i.e. TFLOPs). Many of those SPECviewperf tests render models with 10s of millions of triangles, and should be considered more of a graphics test than a raw compute test.

It would be, in a sense, but since the card does have to process those "tens of millions of triangles," it ends up being a compute test as well in the end.

Vega just suffers from the "recent" bad habit of AMD's marketing department of overhyping and overselling their graphics cards. They did it with the RX 480/580 and the Fury. Those are all functional GPUs, but instead of just getting them out there and letting the market judge them on their merits, AMD's marketing department always tries to hype them sky-high, only to disappoint when people start using them for real.

In the end, a good clean-up of the RTG and marketing divisions at AMD would do a lot of good.
 
It processes them by rendering those triangles, which is not a compute test (it's a graphics test). AMD's architectures tend to be less efficient at graphics work and usually struggle to keep all their massive computing horsepower (i.e. the raw TFLOPs) busy when rasterizing triangles. I don't know why this is the case, but it's been like that for a long time. This is one reason why AMD does worse at games and other non-strictly-compute tests than you'd expect them to.
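To put some rough numbers on the "raw TFLOPs" point, here's a back-of-the-envelope sketch in Python. The shader counts and boost clocks below are approximate published figures and should be treated as assumptions for illustration, not exact specs:

```python
# Back-of-the-envelope peak FP32 throughput: shaders * 2 FLOPs/clock (FMA) * clock.
# Clock speeds are approximate published boost clocks, used purely for illustration.
def peak_fp32_tflops(shaders, boost_ghz):
    return shaders * 2 * boost_ghz / 1000.0  # GFLOPs -> TFLOPs

cards = {
    "Vega Frontier Edition": (4096, 1.60),
    "GTX 1080 Ti":           (3584, 1.58),
    "GTX 1080":              (2560, 1.73),
}

for name, (shaders, clock) in cards.items():
    print(f"{name:22s} ~{peak_fp32_tflops(shaders, clock):4.1f} TFLOPs")
```

On paper Vega FE comes out around 13 TFLOPs versus roughly 11 for the 1080 Ti, yet the graphics-heavy SPECviewperf results don't follow that ordering, which is exactly the point about these being graphics tests rather than compute tests.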
 
The other thing about these tests is that the pro version of the drivers seems to have additional capabilities enabled. For instance, in some of the tests the GP104/GTX 1080-based Quadro P5000 beats out the GTX 1080 Ti, which doesn't make any sense looking only at raw compute performance. It's not clear whether AMD has this sort of segmentation with the Vega FE card, or what is enabled or disabled.
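As a quick sanity check on which driver stack a benchmark is actually running against, here's a minimal sketch (it assumes the Python `glfw` and `PyOpenGL` packages are installed; it only reports the vendor/renderer/version strings and can't reveal what the driver enables or disables internally):

```python
# Minimal probe: create a hidden OpenGL context and print the driver strings.
import glfw
from OpenGL.GL import glGetString, GL_VENDOR, GL_RENDERER, GL_VERSION

if not glfw.init():
    raise RuntimeError("GLFW initialization failed")

glfw.window_hint(glfw.VISIBLE, glfw.FALSE)   # no visible window needed, just a context
window = glfw.create_window(64, 64, "driver probe", None, None)
if not window:
    glfw.terminate()
    raise RuntimeError("could not create an OpenGL context")
glfw.make_context_current(window)

print("Vendor:  ", glGetString(GL_VENDOR).decode())
print("Renderer:", glGetString(GL_RENDERER).decode())
print("Version: ", glGetString(GL_VERSION).decode())

glfw.terminate()
```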

This launch does seem especially bad. If the drivers are incomplete, why release it now? If the graphics performance really is that bad then they are in trouble. Was it really just so an executive could add a bullet to their performance review that Vega got released in Q2?
 
They're late and they've been promising something in 1H 2017 for a very long time now. I guess they made the calculation that releasing the FE in its current state was better than delaying the launch yet again.
 
Maybe another avenue for AMD would be to concentrate on a compute-only card instead of mainstream GPUs in the "pro" market. A cheaper GV100 alternative, for example. Something a prosumer could buy for less than what NVidia charges for a GV100, to complement an existing well-performing GPU, like a 1080 Ti, for the viewport.

Instead, they are trying to beat NVidia where NVidia is strongest, and they keep failing and damaging their reputation.
 
Bingo!

Can we please stop cherry-picking graphs and tests that show our side winning? Yes, I've said before, I'm an AMD fan. The company reminds me of Apple circa 1997-2000. They're turning things around. I LOVE the fact that Apple has based their entire lineup on AMD chips. However, I'm praying that they clean up eGPU solutions so that NVIDIA can be added to any Mac, and I HOPE that by "modular Mac Pro" Apple means PCIe slots, so that we can add NVIDIA cards if we so choose (or give us the option to choose NVIDIA cards at purchase). The point is, Vega WILL get better as it matures, and having the choice to run strong GPUs from either company benefits us!
 
One of AMD's challenges is that they don't have the engineering bandwidth to compete with Nvidia on specialized products up and down the product line. So Nvidia can make a specialized compute card for high-end clusters, but AMD has to make a card that does double duty for compute and graphics. Vega 10 is the chip that is supposed to compete with GP100/GV100, if not on performance then at least on value.
 
Please watch this video. You will find that RX Vega is EXACTLY what AMD said it would be: "faster than an FE 1080." It also lends credence to what Asgorath said earlier about the card being late. It wasn't the marketing dept. that overhyped Vega and the Polaris cards, it was us fanboys! I'm guilty as charged!

 
I was thinking more along the lines of a PhysX card: something a prosumer can afford to supplement their present GPU, not a real competitor to the GV100, whose cost no one can really justify for single-user use. The dedicated mining card without video connectors is the sort of idea I have in mind.

For me, AMD's performance problems are linked to viewport performance, not internal compute power. Even with all those big numbers on paper, they still struggle to display the results at high frame rates compared to NVidia's offerings.

Nah, us fanboys didn't come out on stage and say "Poor Volta" or anything like that.
That, and the question of why a company would release a product with incomplete software (drivers) to exploit it and still expect customers to take them seriously. The latest tests show that the "Game Mode" does absolutely nothing, so why would they even include it and mislead potential customers who are spending $1k on it? People aren't buying future promises, they are buying a GPU for today.
 
They are coming out with their Instinct MI series cards, which are compute-only targeted cards. I'm sure they are aiming at clusters and servers, so they will be expensive. They are starting to release some consumer-oriented mining cards, but I don't see much benefit besides shaving a few dollars off the price, since you are basically just leaving off the DisplayPort connectors.

Since AMD adopted GCN, it's been a compute-first architecture. It also tends to be more parallel, which means they can struggle to keep those pipes filled. I think next-gen APIs like DX12 and Metal help their case a lot here, but that doesn't necessarily help every workload.
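To illustrate the "keep those pipes filled" point, here's a toy occupancy model. The percentages are made-up numbers purely for illustration, not measurements: with a graphics-only queue a chunk of the ALUs sits idle, and the async compute queues that DX12/Vulkan/Metal expose let independent compute work soak up some of that slack.

```python
# Toy occupancy model, not a real GPU simulator.
TOTAL_ALUS = 4096       # Vega 10 stream processor count
GRAPHICS_BUSY = 0.60    # assumed: rasterization keeps 60% of the ALUs busy
ASYNC_DEMAND = 0.25     # assumed: an independent compute job wants 25% of the chip

idle = TOTAL_ALUS * (1 - GRAPHICS_BUSY)
filled_by_async = min(idle, TOTAL_ALUS * ASYNC_DEMAND)
combined = (TOTAL_ALUS * GRAPHICS_BUSY + filled_by_async) / TOTAL_ALUS

print(f"Graphics queue alone:      {GRAPHICS_BUSY:.0%} ALU utilization")
print(f"With an async compute job: {combined:.0%} ALU utilization")
```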
 
I agree.
 
RX 460 4GiB: check.

All RX cards with at least 4GiB of VRAM are on the shortage watchlist now.
 
This is what one company is doing about miners. LOL.
 

[Attachment: IMG_3502.JPG]
There is zero overlap between mining and gaming studios.
If the miners buy up most of the midrange cards, studios will need to target weaker GPUs.

But people will have money left over for more powerful CPUs: more simulation, less graphics.
 
If miners keep buying cards, AMD and Nvidia will make more and sell more cards. Besides, the current mining boom will crash soon enough just like the last one. Then the market will be flooded with cheap cards.
 
Globalfoundries is only investing in improving their 14nm capacity by 20%. They are focusing now on bringing up 7nm and expanding 22FDX.

Card manufacturers are hesitant to increase production because of the risk of getting stuck with stock.

Whether there will be a flood of 3GiB and 4GiB cards soon depends on whether miners switch to other cryptocurrencies or not.
 
As SoyCapitan has already stated multiple times, the current coin du jour will be hitting a ceiling on RX mining capability due to a limitation in the software. After that you can expect miners to either move to another coin or sell off their RX cards and buy something else not affected by that limitation.
 
Another issue is how long the ex-mining cards will still work.

Long enough... People who are into buying third-party GPUs tend to replace them after skipping a gen or two, which means the card only has to work for a year or two, maybe three at most, before the user moves on to the next shiny thing.
 