The more Vega GPUs AMD makes, the smaller the design cost per GPU becomes. No one knows for sure what the R&D cost per GPU will end up being until AMD stops making them.

It's the same with Tesla: they aren't losing money on every car. Every car sold contributes profit, but the volumes have been too low to cover the fixed costs. So the more cars they sell, the smaller the deficit, and eventually they'll cross the break-even line... but not yet.
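To make the amortization point concrete, here's a minimal sketch of how a one-time design cost spreads across units as volume grows. The design-cost figure is an assumption (it happens to match the $280 million number quoted later in this thread):

```
# Hypothetical figures for illustration only - not AMD's actual costs.
DESIGN_COST = 280_000_000   # assumed one-time R&D/design outlay, in dollars

for units_sold in (100_000, 500_000, 1_000_000, 5_000_000):
    design_cost_per_gpu = DESIGN_COST / units_sold
    print(f"{units_sold:>9,} units -> ${design_cost_per_gpu:,.0f} of design cost per GPU")
```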
 
Tesla, up until 2015 I think, was losing $7,000 on each car they sold.
 
Yes, but that number was calculated at the end of the financial year. If they had sold, for instance, 300,000 more cars, they would have lost only $2,000 per car, and so on (those numbers I pulled out of my hat, just to give an example).
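A quick sketch of that year-end calculation, with made-up numbers in the same spirit as the example above (none of these are Tesla's real figures):

```
# Made-up numbers - not Tesla's real financials.
fixed_costs_per_year = 1_000_000_000   # assumed R&D, factories, overhead ($)
margin_per_car       = 10_000          # assumed gross profit per car sold ($)

def loss_per_car(cars_sold):
    """Year-end 'loss per car' = (fixed costs - total contribution) / cars sold."""
    return (fixed_costs_per_year - margin_per_car * cars_sold) / cars_sold

for cars in (50_000, 80_000, 200_000):
    per_car = loss_per_car(cars)
    label = "loss" if per_car > 0 else "profit"
    print(f"{cars:>7,} cars -> {label} of ${abs(per_car):,.0f} per car")
```

The same per-car margin produces a shrinking "loss per car" as volume rises, and it flips to a profit past the break-even volume.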
 
Let me give you one example why. If AMD sells each GPU - one that cost the company $280 million to design and $600 to manufacture - for as low as $399, then they won't lose just $100 on each GPU. AMD is going to lose almost half a billion US dollars.

The amount of stupidity in that post is beyond me. I cannot comprehend that any of you believe it.

The point I keep trying to make is that it's about more than simply the raw cost to manufacture a Vega GPU. There are fixed costs that must run into the hundreds of millions of dollars: architecture design, turning that architecture into Verilog, building the shader compiler and driver software, product advertising and marketing, and so on. All those costs need to be recovered before Vega (and AMD as a whole) becomes profitable. If Vega sells poorly, AMD will likely never recover them. If it sells very well for a long time, they will recover the costs and then start printing money (i.e., that is the point where your math of "it costs $X to make and they can sell it for $600" actually applies).

So yeah, there's a good chance AMD will lose hundreds of millions of dollars on Vega. They've been losing hundreds of millions of dollars a quarter for many years now, so why is this a surprise to you?

But sure, just ignore what I'm trying to say and call me stupid, that'll surely make AMD profitable and the Vega launch successful.

Edit: Oh, another cost factor is all the scrap from HBM2 parts that don't work. The HBM2 transition has been hard for both AMD and NVIDIA.
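To put a number on the "recover the fixed costs" argument, here's a rough break-even sketch. Every input is an assumption for illustration; AMD's real fixed costs and per-unit margin are not public:

```
# All figures are assumptions for illustration - AMD's real numbers are not public.
fixed_costs   = 300_000_000   # assumed: architecture, Verilog, compiler/drivers, marketing ($)
selling_price = 500           # assumed average selling price per GPU ($)
unit_cost     = 350           # assumed per-unit cost: die, HBM2, interposer, board ($)

margin = selling_price - unit_cost
if margin <= 0:
    print("No break-even exists: every unit sold deepens the loss.")
else:
    breakeven_units = fixed_costs / margin
    print(f"Contribution margin: ${margin} per GPU")
    print(f"Units needed just to recover the fixed costs: {breakeven_units:,.0f}")
```

With these particular guesses, Vega would need to sell around two million units before the fixed costs are paid off; change the assumptions and that number moves a lot, which is the whole point of the disagreement here.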
 
The article stated that AMD is losing $100 on each Vega 64 sold because the manufacturing costs are so high. Now that you have been proven wrong, you are trying to spin it your way.

Good luck. Deal with reality. AMD is not losing $100 on each GPU sold. End of story.

Design costs are always factored into the price of the GPU. How so? Let me give you an example. Intel paid $80 million each year to design the CannonLake-S CPUs. And you know what? They scrapped them. Why? Because the manufacturing process would NEVER make the money back.

If AMD knew this GPU would never make its money back, they would never release it. It is that bloody simple. I do not know how you can stick with this when you are blatantly wrong and spreading FUD.

P.S. AMD was losing money because of the WSA: they have to buy wafers from GloFo whether they can sell them or not, otherwise they face financial penalties under the deal with GloFo. The 14 nm process helps AMD get its money back, because they can sell the designs, and their situation is much better than it was before the GloFo-Samsung deal.

And the second reason they were losing money: Bulldozer. End of story.
 

Your table showed Fiji's cost, not Vega's. Even that table is likely guesswork. The biggest driver of Vega's cost is likely the HBM2 memory, which isn't present on any other consumer GPU. That's not to mention that this is the largest chip GlobalFoundries has ever made, which may add to the cost as well.
 
The cost of any GPU part is driven by volume. HBM1 was lower volume than HBM2, because BOTH AMD and Nvidia are using HBM2, while HBM1 was used only by AMD. Secondly, HBM1 was more expensive because it was made only by SK Hynix. This time there are two vendors manufacturing this type of memory, SK Hynix and Samsung, and both HBM2 vendors are present in AMD GPUs.

Once again: for the article to be correct, and for AMD to be losing money on each GPU, each stack would have to cost north of $200 - more than 50 times the current price of GDDR5.

If a Ryzen CPU costs AMD, at best, $40 to make for a 200 mm² die, how much does a 484 mm² die cost AMD to make? And we are talking about the best-yielding process in the industry, from the best semiconductor engineers: IBM's. IMO the die itself costs $70-80 on a 300 mm wafer.
 

According to rumors they are only sourcing HBM2 from Samsung at the moment. Obviously this gives Samsung the power to charge AMD lots of money, especially if they can't keep up with demand.


That assumes all your other estimates are correct, which is obviously questionable since you don't have any insider insight into AMD's real costs.


Defect rate and cost do not scale linearly with die area; they scale worse than linearly, closer to exponentially. Die cost is not an opinion, it's a figure that only AMD knows. Additionally, Ryzen is targeted at a much bigger market than an enthusiast GPU and has different economies of scale. It's not reasonable to try to extrapolate Vega's cost from Ryzen's.
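A small sketch of why cost per good die grows faster than die area. This uses a simple Poisson yield model (a textbook first-order approximation, not GloFo's actual yield data), and the defect density and wafer cost below are assumptions:

```
import math

# Illustrative only: assumed defect density and wafer cost, simple Poisson yield model.
DEFECT_DENSITY = 0.2 / 100    # assumed defects per mm^2 (i.e. 0.2 per cm^2)
WAFER_COST     = 6000         # assumed cost of a processed 300 mm wafer ($)
WAFER_AREA     = math.pi * (300 / 2) ** 2 * 0.85   # usable area with a rough edge-loss factor

for die_area_mm2 in (200, 484):
    yield_fraction    = math.exp(-DEFECT_DENSITY * die_area_mm2)
    dies_per_wafer    = WAFER_AREA / die_area_mm2   # crude estimate, ignores die aspect ratio
    good_dies         = dies_per_wafer * yield_fraction
    cost_per_good_die = WAFER_COST / good_dies
    print(f"{die_area_mm2} mm^2: yield ~{yield_fraction:.0%}, "
          f"~{good_dies:.0f} good dies/wafer, ~${cost_per_good_die:.0f} per good die")
```

Going from 200 mm² to 484 mm² is a 2.4x increase in area but, with these assumed numbers, roughly a 4x increase in cost per good die.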
 
The article stated that AMD is losing $100 on each Vega 64 sold because the manufacturing costs are so high. Now that you have been proven wrong, you are trying to spin it your way.

Good luck. Deal with reality. AMD is not losing $100 on each GPU sold. End of story.

The article said what it said, it has nothing to do with me. My stance for months now has been that AMD will be forced to sell Vega for significantly less than they want to, and they will struggle to make money from it as a result. Feel free to go back and check my posting history to confirm this if you like.

https://forums.macrumors.com/thread...s-announcements.1975249/page-65#post-24545269
https://forums.macrumors.com/thread...s-announcements.1975249/page-65#post-24545316
https://forums.macrumors.com/thread...s-announcements.1975249/page-65#post-24545428
https://forums.macrumors.com/thread...s-announcements.1975249/page-65#post-24545536
https://forums.macrumors.com/thread...s-announcements.1975249/page-65#post-24545721

As I said, let's just wait for AMD to announce their financial data for the next few quarters and we'll see how profitable they are selling Vega at the current prices. You can keep believing you have hard proof of how much money AMD has spent on Vega and that you've convincingly proven everyone else on the internet wrong. I don't really care about specific numbers like "$100 per GPU", but I maintain that AMD would very much like to sell Vega for significantly more than they are; given its terrible competitive performance, they've been forced to sell it at $400/$500 so that they can actually move some units.
 
Perhaps ATI is investing in companies that make big ATX power supplies - so that what they're losing on each Vega GPU sale is offset by selling large power supplies to feed the voracious appetite of the Vega.
 
Remember that a semi-passive PSU will operate fanless up to about 40% load.
 
As I said, let's just wait for AMD to announce their financial data for the next few quarters and we'll see how profitable they are selling Vega at the current prices. You can keep believing you have hard proof of how much money AMD has spent on Vega and that you've convincingly proven everyone else on the internet wrong. I don't really care about specific numbers like "$100 per GPU", but I maintain that AMD would very much like to sell Vega for significantly more than they are; given its terrible competitive performance, they've been forced to sell it at $400/$500 so that they can actually move some units.

Right, regardless of what the actual margin is, AMD certainly wishes Vega 64 were competing with the higher-end GTX 1080 Ti rather than its smaller sibling, the 1080, so that they could charge more - especially since Vega likely costs more to make than the 1080 Ti.
 
According to rumors they are only sourcing HBM2 from Samsung at the moment. Obviously this gives Samsung the power to charge AMD lots of money, especially if they can't keep up with demand.
AMD is using BOTH vendors as a source for their HBM2 stacks. I think the faster HBM2 stacks are coming from Samsung, and the slower ones, on Vega 56, are coming from SK Hynix.
[Photo: Vega 64 with Samsung HBM2 on the left; Vega 56 with SK Hynix HBM2 in the middle.]

What does SK Hynix say about HBM2 prices?

They are about 2.5 times higher than HBM1's.
http://www.anandtech.com/show/11690/sk-hynix-customers-willing-to-pay-more-for-hbm2-memory

So the memory system will be only about $12 more expensive than Fiji's was, and they are able to cut costs on the GPU die size and on the interposer, which is much smaller than Fiji's.

AMD was selling Fiji at a $300 price tag and still making money on it.
With Vega, they will make money on each GPU.
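For what it's worth, here is the arithmetic behind a Fiji-vs-Vega memory comparison, using the 2.5x figure from the SK Hynix article above (read here as a per-stack premium, which is one possible interpretation) and an assumed HBM1 stack price - the unknown that decides the whole result:

```
# The 2.5x ratio is from the SK Hynix statement linked above (read here as per stack);
# the HBM1 stack price is an assumption and drives the entire outcome.
hbm1_price_per_stack = 20                          # assumed HBM1 stack price on Fiji ($)
hbm2_price_per_stack = 2.5 * hbm1_price_per_stack

fiji_memory = 4 * hbm1_price_per_stack   # Fiji: four 1 GB HBM1 stacks
vega_memory = 2 * hbm2_price_per_stack   # Vega 64: two 4 GB HBM2 stacks

print(f"Fiji memory subsystem: ${fiji_memory:.0f}")
print(f"Vega memory subsystem: ${vega_memory:.0f}")
print(f"Difference:            ${vega_memory - fiji_memory:.0f} per card")
```

Under this reading the difference works out to exactly one HBM1 stack's price, so whether it is $12 or considerably more depends entirely on what a stack actually cost.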
 
I have no idea why you think that debatable factoid is relevant to my post.
It means that to run silently at the full load of a Threadripper + Vega 64 PC, you would need at least something like a 1600W PSU.
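The arithmetic behind that, using an assumed full-system draw (the exact wattage of a Threadripper + Vega 64 box varies by build):

```
# Assumed full-system draw for a Threadripper + Vega 64 machine under combined load.
system_load_watts = 600     # assumption: ~180 W CPU + ~300 W GPU + the rest of the system
fanless_threshold = 0.40    # many semi-passive PSUs keep the fan off below ~40% load

required_psu_watts = system_load_watts / fanless_threshold
print(f"PSU rating needed to stay fanless at full load: {required_psu_watts:.0f} W")
```

600 W / 0.40 = 1500 W, which is how you end up shopping for a 1600 W unit.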
 
It's the devs' job to optimize the software for AMD's architecture. Even software creates overhead when you look at it in the grand scheme of things.

It's funny that you grumble about AMD engineers being lazy, when I could say the same thing about the developers who create the software. But hey, double standards are apparent in this industry.

You asked me to show you OpenCL being better than CUDA?
https://wiki.blender.org/index.php/OpenCL


Blender added a simple change to its OpenCL kernel: the Split Kernel. The work is portioned into smaller kernels that can be processed faster. The graph compared a GTX 1060 using CUDA against an RX 480 using OpenCL with the Split Kernel added to the execution path. Here the software is not getting in the way of the hardware and its true capabilities (and actually, there is quite a lot more still to be extracted from GCN...).

This is just an example of what you can do with GCN and OpenCL, if you actually want to do anything.
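For anyone wondering what "split kernel" actually means, here is a deliberately simplified sketch in plain Python - not Blender's OpenCL code; generate_ray/intersect/shade are hypothetical stand-ins. The idea is to replace one monolithic per-sample megakernel with separate per-stage passes over the whole batch, which on a GPU lowers register pressure and improves occupancy:

```
# Conceptual illustration only - not Blender's actual kernels.

def generate_ray(s): return s * 0.5     # toy stand-in for camera ray generation
def intersect(ray):  return ray + 1.0   # toy stand-in for scene intersection
def shade(hit):      return hit * 2.0   # toy stand-in for shading

def megakernel(samples):
    # One huge function per work item: every stage lives in the same kernel.
    return [shade(intersect(generate_ray(s))) for s in samples]

def split_kernel(samples):
    # Each stage runs over the whole batch before the next one starts.
    rays = [generate_ray(s) for s in samples]   # stage 1
    hits = [intersect(r) for r in rays]         # stage 2
    return [shade(h) for h in hits]             # stage 3

if __name__ == "__main__":
    data = [float(i) for i in range(8)]
    assert megakernel(data) == split_kernel(data)   # same result, different scheduling
    print(split_kernel(data))
```

The results are identical; the win on real GPUs comes from each small kernel needing fewer registers and diverging less, so more wavefronts stay in flight.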

I agree about the developers; it is kind of a sad state of affairs. The major software companies' emphasis is on monthly subscriptions and expanding or securing their market share, not necessarily on giving the greatest power to the people. Open source has its benefits, but integrating programs like Blender into a pipeline is very difficult. Lighting, rigging/animating, and rendering applications are spread across a lot of different developers and vendors, not just Autodesk, and every artist has a "go to" program for each of these tasks that has to be integrated into a pipeline. I specifically mentioned Blender because, though it is great for a free tool, it is somewhat hacked together: each developer wrote a different part, each interface is different, and nothing is really that cohesive. It is a hard program to use in mainstream VFX.
I am confused by Raja Koduri; he just seems like a super-fan of VFX, flying around on AMD's dollar holding lightsabers and showing a few workstations with Threadripper and Vega GPUs. It is pretty disconnected from the down-and-dirty, constantly grinding VFX industry that is the reality of every heavy-VFX film and TV show made.
 
AAAAAAND, Baahubali is the biggest movie franchise in Bollywood. Who do you think they hired on the VFX side? Nvidia or AMD?

Of course AMD. It was one of AMD's biggest wins in the VFX scene in the last few years, and they promoted it hard. Even me - a person completely uninterested in Bollywood films - heard about it; that's how big the marketing machine was. The same went for the latest Alien installment: AMD provided the branding for the marketing and the hardware for the VFX work.

I seriously suggest you start looking for solutions for AMD. Start demanding properly optimized software from developers. There are enough tools and documentation about GCN to optimize for it - it's the best-documented GPU architecture on the planet. The only reason devs are not doing it is politics, or pure laziness.


Edit: Sticking to the thread, I found a graph on another forum.

What it shows is the Draw Stream Binning Rasterizer being turned off.

So essentially, all of Vega's new graphics features - the Draw Stream Binning Rasterizer, Primitive Shaders, the High Bandwidth Cache Controller, and the Intelligent Workload Distributor, which is tied to the DSBR and Primitive Shaders - are turned off.

You are basically paying $499 for an overclocked Fiji.

When AMD enables all of those features, it will be interesting to come back to the reviews and read the impressions of everybody who thought this architecture is a failure.
 

And by then, consumer Volta will be out and AMD will be a whole GPU generation behind yet again. I don't understand why you think it's acceptable for AMD to release a product where all the new hardware features of that product are disabled, with no timeline for when they will be enabled.
 
The GPU that will be faster than Vega (presumably...) is GV102, and that is not on the horizon until Q3 next year. So AMD may still have 9-10 months of comfort.

P.S. You are moving the goalposts ;).
 

What goalpost? My previous post was a stand-alone comment. I get that you're upset about how terrible Vega is right now, but please try to stay on-topic.

Do you really think that AMD will be able to fix their drivers in the next month? I really, really don't believe that to be possible based on their track record, I think it's much more realistic to expect another 3-6 months before all these features are unlocked and stable. Then, it'll take developers time to actually start using those features, so at a minimum I'd expect another few months to release a patch to existing games, or much longer for new games/apps. When you factor in all of that, consumer Volta in Q3 2018 doesn't seem that far off.

GV102 will likely be 50-100% faster than GP102, which is already absolutely crushing Vega. Have you seen any comparisons between Vega 64 and the 1080 Ti? No? That's because there's no point in even putting them on the same graph. If, and it's a big if, AMD is able to get their perf up to 1080 Ti levels, NVIDIA will likely have Volta-based parts available. That's all I'm saying. I'm sure you'll respond with theoretical microbenchmark numbers showing how great Primitive Shaders will be and all that, but we've seen how these theoretical values like raw TFLOPs translate into real application performance, particularly for games.
 
It appears I'm not the one failing to stick to the topic of Vega.

There is nothing to fix in Vega's drivers. Vega has a different graphics pipeline than previous versions of GCN; the features that make the biggest difference are simply switched off. The DSBR will provide a 10% increase in performance across the board. The HBCC will increase minimum framerates, and Primitive Shaders - god only knows how much they will add; it will depend on the situation, of course. Not all games are geometry-heavy; those that are will see the biggest benefit. The feature nobody is talking about, and which can provide a massive difference across the board, is the Intelligent Workload Distributor, because it is game- and situation-agnostic. Adding the IWD to Fiji (Fury X) would put it on the same level as the GTX 1080, and clocking that GPU at Vega's levels would make it 10-15% faster than a GTX 1080 Ti. Let me remind you that AMD has higher IPC but struggles to feed the cores with work; the Intelligent Workload Distributor is designed to lift that problem.

So yes, we will see pretty meaningful differences when AMD provides new drivers for Vega. When will that happen? AMD ships big driver updates about twice a year, I think, and Q4 - slightly before the iMac Pro with Vega releases - is the timeline for this driver.

There was merit in that "Poor Volta" marketing.

Yes, I have seen the GP102 comparisons. I have also seen comparisons in which Vega 64 is faster than the GTX 1080 Ti (yes, there are situations like that...).

And lastly: if GV104 is 20% faster than GP102, I would not expect miracles from GV102. All of those GPUs may be CPU-limited, which is the reason low-level APIs are a blessing.
 
There is some evidence that one of the biggest limiting factors in Vega is memory bandwidth. An overclocked Vega 56 comes very close in performance to Vega 64 despite having fewer CUs. Furthermore, Vega has less bandwidth than Fiji despite roughly 50% higher clock speeds. If this is one of the major bottlenecks, then no amount of software tweaks will make it significantly faster.
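The bandwidth comparison is easy to check from the published memory specs (bus width x data rate / 8); these are the commonly cited reference-card figures:

```
# Peak memory bandwidth = bus width (bits) * data rate (Gbps per pin) / 8 bits per byte.
cards = {
    "Fury X (Fiji, 4x HBM1)": (4096, 1.0),    # 4096-bit bus at 1.0 Gbps
    "Vega 64 (2x HBM2)":      (2048, 1.89),   # 2048-bit bus at 1.89 Gbps
    "Vega 56 (2x HBM2)":      (2048, 1.6),    # 2048-bit bus at 1.6 Gbps
}

for name, (bus_bits, gbps) in cards.items():
    print(f"{name:26s} {bus_bits * gbps / 8:6.0f} GB/s")
```

That puts Vega 64 at roughly 484 GB/s versus 512 GB/s for Fiji, despite the much higher core clocks, which is exactly the mismatch described above.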
 
The memory bandwidth that matters is the "effective" bandwidth: a combination of the memory controller, the (in)ability to feed the cores with data, and the disabled features.

When AMD comes up with new drivers, the effective memory bandwidth will also go up.
 
And by then, consumer Volta will be out and AMD will be a whole GPU generation behind yet again. I don't understand why you think it's acceptable for AMD to release a product where all the new hardware features of that product are disabled, with no timeline for when they will be enabled.

Which assumes Nvidia will have fully optimized drivers for a brand new architecture on day 1...
 