As far as I can tell, it's still all he-said, she-said. If they were offering rebates, then saying the SEP didn't change would technically hold true.

Either way, the practical effect is that AMD GPUs are not stocked and not well priced, and their bundle strategy actually just exacerbated the problem.
 
Simplest answer: they were NOT offering rebates. End of story. Gibbo needs to change his sources, because this is the second time during this launch that he was misled by them.
 
I also suggest watching his take on this in the context of prices and the double standards of this industry.
I usually like his videos, but I thought this one was kind of adding fuel to the hype.
 
Adding more fuel to the card price increase FUD:

https://www.reddit.com/r/hardware/c...re_buying_amd_rx_vega_64_at_675_each/dm20et2/

One quote from the post:

So once these vendors got their inventories and ran out of their standalone cards, not wanting to sell the high-dollar Packs, they started selling the Pack cards as standalone, but at the same markup, which was of course $100 higher because it was a Pack card with the higher price. Which is where this "AMD Price Bump™" ******** came out of.
 
So yet another thing that can be blamed on miners and a terrible response to their presence?
 
https://forum.beyond3d.com/posts/1997699/

Ryan Smith, from Anandtech:

Quick note on primitive shaders from my end: I had a chat with AMD PR a bit ago to clear up the earlier confusion. Primitive shaders are definitely, absolutely, 100% not enabled in any current public drivers.

The manual developer API is not ready, and the automatic feature to have the driver invoke them on its own is not enabled.
 

By the time they finally get it to the performance level they want, Nvidia will have released two more generations of cards already, lol.
Primitive shaders are something that requires either developers to code extensively for them or AMD to do the optimization itself; either way it's not that practical. They may end up as a feature only a few developers actually take advantage of over the next few years, quickly forgotten when newer APIs emerge.
 
Just waiting on the Mantiz-02 and then I'll run some tests...
 

Attachment: VEGA_SSD.jpg
Let me quote something:

I think Vega takes the next step towards the goal of a software defined pipeline, where you don't even have a conventional graphics API, the graphics types of vertex, primitive or fragment aren't baked in and inter-stage buffer configuration is determined at run time. I think we both agree this would be neat.

https://forum.beyond3d.com/posts/1998144/

Vega is the basis for every upcoming GCN part. Primitive Shaders, the NCU, HBCC, and the DSBR are the core of every upcoming GCN architecture, so bear this in mind: AMD will have to come up with properly working drivers, whether they want to or not.

(Image: oiK8Bb9.jpg)


In the current state of the drivers, Vega uses its native pipeline in some games and the Fiji pipeline in others. Battlefield 1 is one of those Vega "native" games, which is why we see a 20% increase in IPC, clock for clock and core for core, versus Fiji. There is still a lot of performance to be extracted from Vega.

The biggest issue, and biggest flaw, of Vega is... its advancement, to the point that AMD could not come up with proper software in time for release.


As I have said: today we may be buying GTX 1080 levels of performance for the same price, but in the end we might be getting a GPU much faster than the GTX 1080 Ti for the price of a GTX 1080.

Unfortunately for AMD, it is not a product that I would buy in the current state of the drivers.
 
Miners don't buy GTX 1080s or Vegas; they follow the money, and the GTX 1060 and RX 570 are the most profitable GPUs by a 4x margin. I assume these cards are in demand with AI-focused groups as a much cheaper alternative to Nvidia, since most AI research (and courses) revolves around TensorFlow and it runs pretty well on the latest Radeon GPUs.
 
Laughable.

RX Vega 56 uses the same die and the same tech as the 64; it is effectively the same die. Using the logic proposed by FudZilla, AMD would lose $200 on each GPU.


The whole story is pure BS. I have no idea how, around a single GPU release, there can be such an enormous amount of false information. First it was price increases, now this. It's madness.

The interposer and substrate cost $2.50 per 100 mm². The GPU die costs $80. Testing, validation, and assembly are $5 per GPU. TSVs cost 35 US cents each.

Funniest part: HBM2 volume is bigger than HBM1's was, and prices are much more competitive. An HBM1 stack cost $12 at the time. How much do you believe HBM2 costs?
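
For what it's worth, here is a quick back-of-the-envelope sketch of those claimed figures (Python just for the arithmetic). The ~700 mm² interposer area, the TSV count, and the ~$30-per-stack HBM2 figure argued later in the thread are my assumptions, not numbers stated here; the two HBM2 stacks are Vega 10's actual configuration.

```python
# Rough package-cost estimate using the per-component figures claimed above.
# Assumed values are marked; everything else comes from the post.

INTERPOSER_COST_PER_100MM2 = 2.50    # claimed: interposer + substrate, $ per 100 mm^2
GPU_DIE_COST = 80.00                 # claimed: $ per die
ASSEMBLY_TEST_COST = 5.00            # claimed: testing, validation, assembly per GPU
TSV_COST = 0.35                      # claimed: $ per TSV

ASSUMED_INTERPOSER_AREA_MM2 = 700    # assumption (the thread only says "smaller than 700 mm^2")
ASSUMED_TSV_COUNT = 10               # assumption; the real count is not public
ASSUMED_HBM2_STACK_COST = 30.00      # assumption: upper bound argued later in the thread
HBM2_STACKS = 2                      # Vega 10 carries two HBM2 stacks

interposer = INTERPOSER_COST_PER_100MM2 * ASSUMED_INTERPOSER_AREA_MM2 / 100
total = (interposer + GPU_DIE_COST + ASSEMBLY_TEST_COST
         + TSV_COST * ASSUMED_TSV_COUNT + ASSUMED_HBM2_STACK_COST * HBM2_STACKS)

print(f"interposer/substrate: ${interposer:.2f}")     # $17.50
print(f"estimated package total: ${total:.2f}")       # about $166 under these claims
```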
 
Got a Vega 64 on release day; pretty happy with it given the horrific state of the drivers. There's a lot of stuff not enabled, and performance only gets better from here :)


Not sure where you're getting your figures from, but I've seen $150 floated for 8 GB of HBM2 and $25 for the interposer. That GPU die is huge. I doubt AMD is making much on Vega at the moment. Unfortunately, we're also in the middle of a RAM price rise (which will affect HBM cost too).
 
Well, I know where you got that information from: Gamers Nexus. They made a video on this and missed on every point.

The Fiji interposer cost $20 because it was 1000 mm². How can one smaller than 700 mm² cost more when it is the same tech?

GDDR5 memory costs $4-6 per chip. Even assuming HBM2 is five times more expensive than GDDR5 (which is incorrect), that makes it $20-30 per stack and $60 total.
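
To spell out that bound next to the $150 + $25 figures floated above, here is a minimal sketch of the arithmetic, assuming I'm reading the per-stack claim correctly:

```python
# Upper bound on Vega's HBM2 cost as argued above: take the claimed $4-6 per
# GDDR5 chip, assume HBM2 is 5x as expensive (stated above to be an overestimate),
# and multiply by the two stacks on Vega 10. Then compare against the figures
# quoted from Gamers Nexus.

gddr5_per_chip = (4.0, 6.0)                              # claimed GDDR5 cost range, $
hbm2_per_stack = tuple(5 * c for c in gddr5_per_chip)    # 5x markup -> $20-30 per stack
hbm2_total = tuple(2 * s for s in hbm2_per_stack)        # two stacks -> $40-60

fiji_interposer = 2.50 * 1000 / 100    # $20 for the ~1000 mm^2 Fiji interposer
vega_interposer = 2.50 * 700 / 100     # <= $17.50 if Vega's is under 700 mm^2

print(f"HBM2 upper bound: ${hbm2_total[0]:.0f}-${hbm2_total[1]:.0f} (vs. $150 floated)")
print(f"interposer: <= ${vega_interposer:.2f} (vs. $25 floated; Fiji was ${fiji_interposer:.2f})")
```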
 
https://twitter.com/GFXChipTweeter/status/902897048080457728

Raja pretty much confirms that Vega is a trade-off from a gaming perspective. It has to compete with three differently sized GPUs in three different markets, and the least impressive showing is, obviously, the one that drives the mindshare about the architecture. The future, however, is interesting.

AMD is losing the battle but winning the war? Maybe a war with itself? I am all for open solutions, alternatives to CUDA and alternatives to data crunching like Volta, but AMD seems to be losing to its own self-hype. Raja Koduri has VFX, GPU rendering, CG, and visual effects all over his Twitter page; it feels like an advertisement for The Art Institute, Full Sail, or Gnomon. Except for a few case studies, I don't really understand AMD's end game other than trying to appear relevant as long as possible and not die.

Their engineers seem lazy, and they seem to enjoy pretending they have some relevance to the VFX industry rather than actually coding and programming their way into it. They just seem like fans of film and not much else. Show us real solutions for farm rendering in Nuke and Maya, or show how OpenCL is a better solution than CUDA. Other than a bunch of self-hype, I don't see any ideas trickling down. Show me the better solution, AMD. There needs to be a lot more software-level integration. A lot!! I don't see it happening.

Stability, software integration, and market saturation are all more important than saving 30% on a less efficient card. If I can only render using Blender or some other one-off app and not a full pipeline, there is no end-user advantage. Nvidia and CUDA, though more expensive and closed, seem like they are worth the money. This is AMD's worst launch ever, IMHO. It might be the end of any high-end GPU from AMD. Raja's Twitter feed is a lot of explaining why his platform isn't dominant and how it will be in the future; he should just scrub any VFX-related words from it, since his place in that industry is more of a wish for his chip than a reality. I would love to be proved wrong... tick... tock... still waiting.
 
It's the developers' job to optimize their software for AMD's architecture. Even software creates overhead, when you look at it in the grand scheme of things.

It's funny that you grumble about AMD's engineers being lazy, when I could say the same thing about the developers who create the software. But hey, double standards are apparent in this industry.

You ask to be shown that OpenCL can be better than CUDA?
https://wiki.blender.org/index.php/OpenCL

(Image: Timings.png)

Blender added a simple change to its OpenCL kernel: the Split Kernel, which breaks the work into smaller pieces that can be processed faster. The graph compares a GTX 1060 using CUDA against an RX 480 using OpenCL with the Split Kernel in the execution path. Here the software is not getting in the way of the hardware and its true capabilities (and actually, there is quite a lot more to be extracted from GCN...).

This is just an example of what you can do with GCN and OpenCL, if you want to do anything at all.
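
For readers wondering what the Split Kernel change actually does, here is a rough conceptual sketch in plain Python rather than OpenCL (the stage functions are trivial stand-ins I made up; this is not Blender/Cycles code). The point is the difference between one megakernel that carries every stage's state through the whole loop and the same work split into small per-stage passes, which on a GPU lowers register pressure so more threads stay resident.

```python
# Conceptual illustration of the "split kernel" idea, not real Cycles/OpenCL code.
# A megakernel runs every stage per ray in one big body (lots of live state at
# once); the split version runs each stage as its own small pass over the batch.

def intersect(ray):
    # stand-in for scene intersection: pretend rays with positive x hit something
    return ray if ray[0] > 0 else None

def shade(hit):
    # stand-in for shading a hit point
    return (1.0, 0.5, 0.2)

def background(ray):
    # stand-in for the environment colour
    return (0.1, 0.1, 0.3)

def megakernel(rays):
    # one big body per ray: intersect, branch, shade, all with state live at once
    results = []
    for r in rays:
        h = intersect(r)
        results.append(shade(h) if h else background(r))
    return results

def split_kernels(rays):
    # same work as separate passes; each pass only keeps the state it needs
    hits = [intersect(r) for r in rays]                   # intersection pass
    return [shade(h) if h else background(r)              # shading pass
            for r, h in zip(rays, hits)]

rays = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)]
assert megakernel(rays) == split_kernels(rays)            # identical results either way
```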
 