What is so difficult to understand? Lack of competition => high prices, slow innovation.
Vega RX - too little, too late, too much power draw.

Me? ... For cMP I'm still happy with 2 x 7970

(The Prince134 "King of Mac" build)

The Vega is low power draw in comparison.

I may build a threadripper system with dual Vegas and try 'em in the Mac.

I just don't understand how the rest applies to this topic or to me.
 
You have said that their disappointment will lift in 6 months.

There is a certain, very vocal group here that is NOT disappointed with Vega's performance; they are happy about it. You should have learned that by now if you have read their posts about AMD over the past years on this forum.


On the other hand, I'm not sure it will take 6 months to improve Vega's performance.
 
AMD announces Radeon RX Vega 64 series

The AMD Radeon RX Vega series is here. There are currently six Vega cards: two Frontiers, three RX Vega 64s, and a cut-down version called the RX Vega 56.

Radeon RX Vega 56: Starting with the 56. This model has 3584 Stream Processors, 10.5 TFLOPs of computing power, 410 GB/s of memory bandwidth (so around an 800 MHz memory clock), and a price tag of 399 USD.

Radeon RX Vega 64: The AMD Radeon RX Vega 64 comes in three variants. The cheapest one is 499 USD; this card has 4096 Stream Processors, 8 GB of HBM2 memory, and 484 GB/s of bandwidth. The TDP is 295 W, and it will offer up to 12.66 TFLOPs of compute.

The fastest Vega is called the RX Vega 64 Liquid Cooled Edition. This model has a higher TDP (345 W), but also higher clocks, up to 1677 MHz. This card will cost you 699 USD.

Edit: There's also supposed to be an RX Vega 56 Nano, similar in design to the previous Radeon R9 Nano graphics card; however, there are no further details available at the moment.

More info: http://radeon.com/RXVega
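Those bandwidth figures line up with Vega's 2048-bit HBM2 bus. Here's a quick sanity-check sketch (the 945 MHz Vega 64 memory clock is my back-calculation from the quoted 484 GB/s, not an announced spec):

```python
# Peak HBM2 bandwidth: bus width (in bytes) x effective transfer rate.
# HBM2 is double data rate, so transfers per second = 2 x memory clock.
BUS_WIDTH_BITS = 2048  # Vega 10's published HBM2 bus width

def peak_bandwidth_gbs(mem_clock_mhz: float) -> float:
    bytes_per_transfer = BUS_WIDTH_BITS / 8      # 256 bytes per transfer
    transfers_per_sec = 2 * mem_clock_mhz * 1e6  # DDR doubling
    return bytes_per_transfer * transfers_per_sec / 1e9

print(peak_bandwidth_gbs(800))  # 409.6  -> the "410 GB/s" Vega 56 figure
print(peak_bandwidth_gbs(945))  # ~483.8 -> the "484 GB/s" Vega 64 figure
```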

Look at the power draw, I really wonder what the iMac Pro will deliver.
 
"Also its almost certain Nvidia will be delivering its consumer Volta chip with GTX 1080 Ti performance at a $500 price point within the next 6 months."

He will be happy in six months... yes?
 
According to Apple, there will be an 11 TFLOPS GPU inside the iMac Pro. And right now, even the Vega 56 needs 210 W to achieve 10.5 TFLOPS.

ATM, it looks like they will undervolt and downclock the Vega 64 to fit it inside the iMac Pro. However, I don't think it's possible to cut power by 50% (to ~150 W) and only sacrifice 10% of the performance. If it is possible, then IMO AMD was really stupid to market the Vega 64 as a 295 W GPU.
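For what it's worth, here is a rough first-order model (dynamic power scaling roughly with frequency times voltage squared; a common approximation, not an AMD figure) of what an undervolt plus downclock could buy; the open question is how far Vega's voltage can actually drop:

```python
# First-order dynamic power model: P ~ f * V^2 (ignores static leakage).
def relative_power(freq_scale: float, volt_scale: float) -> float:
    return freq_scale * volt_scale ** 2

# Hypothetical example: 10% lower clocks plus a 20% undervolt.
print(f"{relative_power(0.90, 0.80):.2f}x stock power")  # 0.58x
# ~42% power saved for ~10% performance, IF the silicon holds that voltage.
```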
 
The GPU in the iMac Pro has lower voltage and a lower core clock @ 1.35 GHz, so the power draw will be lower.


The Vega Nano has 1.2 GHz... at least the engineering samples have had that clock...
 
The Vega Nano is 150 W, but can it achieve 11 TFLOPs at that power draw?
If so, then it's proof again that AMD just wants to **** with consumers with their big, power-hungry, component-heavy models.

On the plus side for Vega: if High Sierra has plug-and-play support for all these cards, you can use one now in your cMP and then install it in a future MP 7,1.

Unless Apple keeps producing more crippled or proprietary bull crap.
 
1.2 GHz on a GPU with 4096 GCN cores gives 9.83 TFLOPs.


AMD confirmed that their Draw Stream Binning Rasterizer was completely disabled in Vega FE, and that Vega FE had not enabled the power-saving features (load balancing).
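The arithmetic behind that figure, applied to the other clocks floating around this thread (each GCN stream processor retires one FMA, i.e. two FLOPs, per clock; the 1546 MHz entry is the RX Vega 64 air-cooled boost clock implied by the quoted 12.66 TFLOPs):

```python
# Peak FP32 throughput for GCN: stream processors x 2 FLOPs (one FMA) x clock.
def peak_tflops(stream_processors: int, clock_ghz: float) -> float:
    return stream_processors * 2 * clock_ghz / 1000.0

print(peak_tflops(4096, 1.200))  # 9.83  -> the Nano engineering-sample clock
print(peak_tflops(4096, 1.350))  # 11.06 -> Apple's ~11 TFLOPS iMac Pro claim
print(peak_tflops(4096, 1.546))  # 12.66 -> RX Vega 64 air-cooled boost
```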
 
I hope for AMD's sake it's good enough in compute tasks that they can sell a bunch of these to data centers, because as a gaming chip it's uninspiring.
Links for those claims?
 
Stacc, wrong link.

https://www.techpowerup.com/reviews/AMD/Vega_Microarchitecture_Technical_Overview/6.html

Ahh, it was only a few weeks ago where AMD announced the launch of the Radeon Vega Frontier Edition and tests quickly revealed that draw-stream binning rasterization (DBSR) was not enabled on it despite the Vega architecture supporting it. AMD today confirmed that Vega 10 does indeed support it, and that RX Vega SKUs should too. We are not sure yet if there will be a Radeon Pro software driver update to help enable it with the prosumer Vega Frontier Edition at this point.

The whole presentation gives a lot of information on the Vega architecture.

AMD is blatantly saying: we have released an unfinished product. Expect that drivers might bring performance gains, but those also rely on developers:

https://www.techpowerup.com/reviews/AMD/Vega_Microarchitecture_Technical_Overview/4.html
With Vega, AMD has also devised a new method to deal with the geometry pipeline. This also comes down to effective pixel-shading and rasterization, wherein the new "Primitive Shader" combines both geometry and vertex shader functionality to increase peak throughput by as much as a 100% increase in the native pipeline relative to Fiji. The base improvement immediately helps in the rendering of scenes with millions of polygons where only a fraction is visible on screen at all times- a video game environment is a prime example here with objects in front of others. Implementing primitive shader support comes partly with DX12 and Vulkan, but ultimately falls to the developers again which can end up limiting the applications we actually see. To aid in adoption, AMD has increased the discard rate for the native pipeline by ~2x that of Fiji but, more importantly, as much as a 5x increase via the Vega NGG fast path implementation. Again, there has been no mention of NGG fast path being available any time soon so it is a feature that may end up being theoretical only.

This pretty much explains why Vega is performing as it is in the current state of the software.

I think one of the first games to adopt Vega features will be Overwatch. AMD recently started a very close collaboration with Blizzard on their games and has improved performance quite a lot.

Within 2 months, Overwatch on the RX 460 went from 70 FPS in the 1080p Ultra preset to an 84 FPS average, and the RX 560 broke a 106 FPS average framerate, coming close to the GTX 1050 Ti.
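For scale, that RX 460 gain works out to:

```python
# The RX 460 driver gain cited above, as a percentage.
print(f"RX 460 in Overwatch: {(84 - 70) / 70:+.0%} in two months")  # +20%
```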

So... Jarred Land (president of RED Digital Cinema) posted on his FB page that AMD gave him an alpha version of a Vega-based 2 TB Pro SSG GPU to test out. He compares it with the TITAN Xp. It's an insane beast. I hope Apple uses this bad boy if they stick with AMD! Check it out:
You do not have to rely on the SSG GPUs.

"Use local video memory as a last level cache for system memory and storage".

That is essentially how HBCC works. Of course, you still need the API, and the data will travel over PCIe to the GPU instead of living directly on the GPU as it does with the SSG.
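To illustrate the concept only (this toy sketch is mine, not AMD's implementation; the class name, page granularity, and LRU policy are all invented for illustration): VRAM acts like a small page cache in front of a much larger backing store, with misses faulted in over PCIe.

```python
from collections import OrderedDict

class ToyHBCC:
    """Toy model of 'VRAM as a last-level cache': a small LRU pool of
    pages in front of a much larger backing store (system RAM / NVMe)."""

    def __init__(self, vram_pages: int):
        self.capacity = vram_pages
        self.vram = OrderedDict()  # page_id -> page data resident in "VRAM"

    def read(self, page_id, backing_store):
        if page_id in self.vram:
            self.vram.move_to_end(page_id)     # hit: refresh LRU position
        else:
            if len(self.vram) >= self.capacity:
                self.vram.popitem(last=False)  # evict least-recently-used page
            self.vram[page_id] = backing_store[page_id]  # fault in over "PCIe"
        return self.vram[page_id]

# A 4-page "VRAM" in front of a 1000-page dataset:
cache = ToyHBCC(vram_pages=4)
store = {i: f"page-{i}" for i in range(1000)}
for i in (0, 1, 2, 0, 500, 999):
    cache.read(i, store)
```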
 
From AnandTech:

Moving on, perhaps the burning question for many readers now that they have the specifications in hand is expected performance, and this is something of a murky area. AMD has published some performance slides for the Vega 64, but they haven’t taken the time to extensively catalog what they see as the competition for the card and where the RX Vega family fits into that. Instead, what we’ve been told is to expect the Vega 64 to “trade blows” with NVIDIA’s GeForce GTX 1080.

Obviously this expectation is with whatever driver voodoo AMD is bringing (or not bringing) with the finished Vega RX drivers.
 
Are you interested in this architecture and AMD hardware? Serious question.
Yes, I'm interested in how spectacularly ATI managed to miss performance targets while consuming more power than most systems can provide.

And if they prematurely introduced something before the drivers and software were ready - I'm curious as to why.

Here are 1000 words to describe the Vega release:

 
Heh. Didn't both the 1080 and 980 have similar issues at launch? I remember all the horrible compute scores, and the cries of "The drivers are still early!"

Worked out so badly for Nvidia....
 
If you are interested in how Vega missed its performance targets, I'm pretty sure this post will be interesting for you:

https://forums.macrumors.com/threads/the-vega-rx-thread-rumors-and-info.2056361/page-2#post-24846631


I have posted previously: in a perfect world, with properly optimized software, a 1.6 GHz, 512 GB/s Vega should be two times faster than Fiji.

So far, it appears that Vega, with the same memory bandwidth and a 45% higher clock speed, is up to 25% faster than Fiji. So something is definitely holding the architecture back. But just as definitely, it's not the hardware. I'm sure that after reading my post describing the Vega architecture, and comparing it with the technical details AMD released and TechPowerUp reposted, you will understand why I am saying this.
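Putting rough numbers on that gap:

```python
# Clocks are up 45%, but the observed speedup over Fiji is only ~25%,
# so per-clock throughput has actually regressed in current software:
per_clock = 1.25 / 1.45
print(f"{per_clock:.2f}x Fiji per clock")  # 0.86x, ~14% less work per clock
```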

P.S. Which performance targets do you say AMD missed...?

Driver support alone may not help this architecture. Applications have to be reworked to utilize some of the Vega architecture's features (FP16, for example, is a must; the same goes for Primitive Shaders, but that feature also relies on driver support, and it's Vulkan and DX12 only).

So don't expect that Vega will magically turn out to be great in DX11 games. Vulkan and DX12 are a completely different story.
 
Performance-per-watt promises, for one.
I'm sure they were talking about performance per watt for the Vega Nano.

P.S. Gaming efficiency is one story, and professional-workload efficiency is another. I thought this was a professional forum, but once again, we are only talking about gaming performance.
 

What makes you think this is a "professional" forum? It's to discuss the Mac Pro, which is a range of computers sold by Apple (that used to have a PCIe slot and could use a wide variety of graphics cards, official or otherwise). People use Mac Pros for all kinds of things, including playing games.
 
Links?

I don't remember the 1080 or the 980 having those launch issues, and I have dozens of Nvidia cards. Nvidia's been shipping drivers and CUDA ready for the latest GPUs for quite some time.

We got pretty much immediate speedups from upgrading - just upgrade CUDA to the new version that supports Maxwell/Pascal and things are faster immediately. (Note that my focus is compute-only - no games, and no 3D graphics.)

If you have apps that don't use the new features - no speedups until the apps are updated. That can be fixed by a strong 3rd-party early-adopter program to help apps be ready on launch date. Nvidia mostly seems to have that - CUDA 8 was out when Pascal shipped for real. If your apps use the CUDA libraries - they're mostly ready on day 1. (Many GPU advances enhance existing APIs without code changes; using new APIs does need some app code changes.)

CUDA 9 with Volta support is nearing the end of beta, and with GV100-based cards showing up, it should be public soon.

ATI should be embarrassed for coming out with a 345 watt liquid-cooled card - and saying that "the software/drivers aren't ready".
You should apply to be the next Trump White House communications director. Your belief in alternative facts about Vega's "perfect world" performance makes you a shoo-in.
 

You keep moving the goal posts here. First, graphics workloads are professional. Apple is advertising the iMac Pro for VR, featuring a Vega graphics chip. There are many other "professional" workloads that stress graphics performance.

Second, the Vega FE/RX Vega 64, with a TDP of 295 W and GTX 1080-level graphics performance, is pathetic efficiency-wise. Sure, a downclocked chip may slightly improve this. But even if there were zero performance loss between the 295 W RX Vega 64 and a 210 W Vega Nano, it would still be less efficient than the 180 W GTX 1080.
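The arithmetic, assuming (generously) that all three deliver identical GTX 1080-class performance:

```python
# Perf per watt relative to the GTX 1080, assuming equal performance.
GTX_1080_TDP = 180
for name, tdp in (("RX Vega 64", 295), ("hypothetical 210 W Vega", 210)):
    print(f"{name}: {GTX_1080_TDP / tdp:.0%} of GTX 1080 perf/W")
# RX Vega 64: 61%; 210 W Vega: 86% -> still behind even with zero perf loss
```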
 