http://www.digitimes.com/news/a20170908PD210.html

Vega 11 will have HBM2. It appears that manufacturing costs are not as high as some posters tried to claim ;).

Vega 11 is the Polaris 10 replacement.

Or it means that if current HBM2 prices continue, the margins on Vega 11-based products will be just as bad as on Vega 10. The decision to make Vega 10 and 11 HBM-based was made a long time ago; since HBM2 prices are much higher than anticipated, AMD has no choice but to ride it out until prices (hopefully) come down to a level where the margins are better.

I bet being limited to 4 GB of HBM2 memory will certainly hurt the perception of the card, if not performance, depending on how RAM-intensive the title is. 6 GB and up is basically expected on midrange cards at this point.
 
Do you believe that AMD would release a GPU with HBM2 in a lower-margin product if manufacturing costs were so high that it would not be viable to recoup the manufacturing costs AND the design costs?
 
There is a certain amount of risk involved in planning products that have such long development cycles and involve new technologies. 12-18 months ago, when they would have been finalizing Vega 11's design, or even longer ago when they were designing Fiji, HBM looked like a good solution. Now that HBM2 products are here, however, the memory is suffering from low availability and high cost, driving down margins and performance in Vega.

HBM has probably hurt every product it's been included in. Fiji was too capacity-limited, which restricted its professional applications. Vega 10 doesn't have enough bandwidth and would have been better off with conventional GDDR5X (or more stacks of HBM2). Vega 11 will probably also be limited in capacity and bandwidth if they have to use a single stack.
 
Did you even read the links I posted? didModifyRange allows the application to explicitly say "I only modified X bytes at offset Y out of the total Z bytes". If the driver can't turn that into a transfer of only the modified bytes, then I don't know what to tell you.
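
For reference, a minimal sketch of how that Metal call is used (the buffer length, offset, and patch data here are made up for illustration):

```swift
import Metal

// A managed-storage buffer: CPU and GPU each keep a copy, and the
// app must tell Metal which CPU-side writes need to be synced over.
let device = MTLCreateSystemDefaultDevice()!
let buffer = device.makeBuffer(length: 4096, options: .storageModeManaged)!

// Write 256 bytes at offset 1024 through the CPU-side pointer.
let patch = [UInt8](repeating: 0xFF, count: 256)
patch.withUnsafeBytes { src in
    buffer.contents().advanced(by: 1024)
        .copyMemory(from: src.baseAddress!, byteCount: 256)
}

// "I only modified 256 bytes at offset 1024 out of 4096" -- the
// driver can now upload just this range instead of the whole buffer.
buffer.didModifyRange(1024..<1280)
```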

I think you're talking about a different case. What I'm talking about is the GPU deferring the loading of data until it's actually needed. The CPU isn't psychic. It doesn't know what the shaders on the card will need, so it just blindly transfers over everything the developer loads. There is a performance penalty for moving data.

Even in the case you're talking about, the developer might be marking data as changed that the GPU doesn't actually need in its next render pass, leading to the driver suboptimally passing too much data to the GPU, which will be a performance hit.

HBCC would flip this. Only the GPU pipeline knows what exact resources it will need in a render pass, so instead of pushing over everything you could possibly need and taking a performance hit for doing so, the GPU would pull only the resources it needs.

Done right, even though the GPU is lazy loading resources, it should still be better than just shoving everything over into VRAM every frame, unless the developer optimized that resource loading perfectly.

But if I'm dynamically updating a texture that the user is not looking at, why bother moving that data to the card and taking a hit for doing so?
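
As a rough illustration of the push-vs-pull trade-off described above (a toy model only: the byte counts are invented, and real HBCC paging happens in hardware, invisible to application code):

```swift
// Toy model of the two transfer strategies discussed above.
// Purely illustrative -- the numbers are made up, not measurements.

let totalSceneBytes = 4_000_000_000   // everything the frame *might* reference
let touchedBytes    =   600_000_000   // what the render pass actually samples

// Eager "push": the CPU blindly uploads everything it loaded.
let pushTraffic = totalSceneBytes

// Demand "pull" (HBCC-style): only the pages the GPU faults on get moved.
let pullTraffic = touchedBytes

print("push moves \(pushTraffic / 1_000_000) MB; pull moves \(pullTraffic / 1_000_000) MB per frame")
```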
 
Who knows if it was by design, but "AMD" has successfully had the Volta thread padlocked! There always has been and always will be at least two GPU factions! AMD v. Nvidia for pages and pages. Since the Volta thread has been padlocked, I came here to concede. Even though I have a Quadro 4000 Mac and a GTX 980 Ti, Vega seems too strong and fast! See you at the finish line. Go Vega! :p (And yes, the "V" I placed on the side of the bottom car stands for Volta.)

[Attachment: Vega v. Volta2.png]
 
It may be strong, it may be fast, but holy crap is it hungry.

AMD's spec sheet lists the Vega 56 at 210 watts "typical board power":

http://products.amd.com/en-us/searc.../Radeon™-RX-Vega-Series/Radeon™-RX-Vega-56/92

AMD lists the typical board power of the Vega 64 at 295 watts:

http://products.amd.com/en-us/searc.../Radeon™-RX-Vega-Series/Radeon™-RX-Vega-64/93

Vega 64 Liquid Cooled: 345 watts

http://products.amd.com/en-us/searc...ga-Series/Radeon™-RX-Vega-64-Liquid-Cooled/91

Vega Frontier Edition: 300 watts

http://products.amd.com/en-us/searc...on™-Vega-Frontier-Edition/Frontier-Edition/90
 
You do realize the "strong/fast" and "Go Vega!" were in jest. There is no way that "POS" Vega is going to beat the Formula race car (Volta)! And the only thing I concede is that I should have placed a #64 racing number on the side of that Vega. This way "AMD" would recognize what pony they have in the race!
 
Haha yeah I got the joke. That's why I added my bit. I don't care what cards people use as long as we all get the performance we desire.
 
  • Like
Reactions: OS6-OSX
Who knows if it was by design, but "AMD" has successfully had the Volta thread padlocked!
The mods did the right thing - Jensen (Nvidia CEO) recently said that consumer Volta won't be a factor before the end of the year, and the thread wasn't adding much value.

I'm sure that new Volta threads will emerge when serious info about lower-priced Volta chips starts to circulate. Or if GV100 GPUs become more easily available.

(And yes, the "V" I placed on the side of the bottom car stands for Volta.)
That confused me at first - since the "V" is against a red background and "red" is associated with AMD ("green" with Nvidia). ;)

I did like the picture of the Chevrolet Vega - what an EPYC FAIL in the auto world. I remember reading a long time ago that the Vega had similar gas tank placement to the Ford Pinto, and similar risks in rear-end collisions. The story said that a consumer group took action against Ford - but ignored the Vega, ostensibly because the engine failure rate of the Vega was so high that there wouldn't be many of them still running by the time the action could be settled.
 
Bringing up that Vega certainly put an end to this previously very interesting thread...

Any new info on this Vega?
 
Stock Vega 64 working fine on my 5,1 with High Sierra. No boot screen, but no need for any driver tomfoolery. Haven't stressed it out yet; only using 6-to-8-pin adapters right now.

Card will likely get replaced with an RX 580 or somesuch in the 5,1.

Google Chrome makes the UI stall with some drawing issues, so I swapped back to Safari. (Safari hated my GTX 980 after sleep, etc.)
 
Yep, it has GL issues, and of course no HEVC decode support in Apple's drivers.
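
If you want to see what the OS itself claims, VideoToolbox has a hardware-decode query (added in macOS 10.13); a minimal sketch:

```swift
import VideoToolbox

// Ask VideoToolbox whether hardware HEVC decode is available on this
// machine (API introduced in macOS 10.13 High Sierra). Per the posts
// here, on a cMP with Vega this currently comes back false.
let hwHEVC = VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC)
print("Hardware HEVC decode: \(hwHEVC ? "yes" : "no")")
```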
 
You've got balls of steel. Be very careful...

Even the RX 580 you have to be very careful with!
 
We are running and configuring Mac Pros with VEGA cards here in Europe at the moment.

All stable and cool, but we obviously take the (insanely high) required power from the Pixlas mod, which we do in all of these Macs.

You're pretty much going to fry those Mac Pros if you run two full 8-pins off those poor old mini 6-pins.
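
(Rough math, assuming the cMP's mini 6-pin feeds are held to the standard PCIe aux rating of 75 W each: two mini 6-pins plus 75 W from the slot is about 225 W, while AMD's own spec pages above put the Vega 64 at 295 W typical board power, so the stock connectors are over budget before you even start.)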


In my opinion performance is just decent, not insane, and probably lower than 2 x RX 580.

The point, though, is that with a single, more powerful GPU the other three PCIe slots stay free, whereas in a dual-GPU setup only two would be left for things other than GPUs.

But yeah, it sucks that they are not on “Titan Xp” level.


People don’t mind paying the price, just give them more performance, AMD!
 
The hypothesis is that they won't add support for GPU decode in order to force users to buy Kaby Lake Macs, which support decode on the CPU. It's another example of what a bastard, penny-pinching corporation they are, screwing their loyal customers.


Yes. I bought a GTX 960 thinking that I would get HEVC decoding... Apple should use the GPU for decoding.
 
So that’s not unique to Vega. The cMP cannot support HEVC (H.265)?

In Windows, Polaris and Vega decode and encode HEVC perfectly, with no CPU assistance. Nvidia Pascal manages OK but needs more CPU assistance.

High Sierra is a massive, epic fail. They said they were serious about graphics this time, and they are nowhere on the map. I mean, Apple are absolutely nowhere. They are below the consoles.
 
Metal 2 is a pretty big step forward. It's not quite on par with DirectX 12, but it's at least finally jumped ahead of where modern OpenGL is. Results of that probably won't turn up for another year as software companies adapt. The Nvidia Metal drivers also continue to be behind the AMD ones.

Dunno what's up with HEVC, but the Vega drivers aren't done. Hard to see them shipping the iMac Pro without HEVC acceleration (I don't think the iMac Pro will feature Iris Pro), so it seems likely Vega will get HEVC decoding when the drivers are done.
 
Let’s see how Apple treats this “Pro” machine.
 
The real step forward will be when Apple starts making insane CPUs/GPUs for the Mac that are ahead of Intel, AMD, or Nvidia, and that are easy to code for and support.

To me it looks like Swift and Metal are only some of the tools they will provide, and iOS is, at the moment, the playground where Apple is testing its graphics capabilities.

But yeah, why does the situation have to be so bad while we get there?
 