If so, Your disappointment will only last for 6 months
what is so difficult to understand? Lack of competition => high prices, slow innovation.
Vega RX - too little, too late, too much power draw.
You have said that their disappointment will lift in 6 months.
Me? ... For my cMP I'm still happy with 2 x 7970
(The Prince134 "King of Mac" build)
The Vega is low power draw in comparison.
I may build a threadripper system with dual Vegas and try 'em in the Mac.
I just don't understand how the rest applies to this topic or to me.
AMD announces Radeon RX Vega 64 series
The AMD Radeon RX Vega series is here. There are currently six Vega cards: two Frontier Editions, three RX Vega 64 variants, and a cut-down version called RX Vega 56.
Radeon RX Vega 56: Starting with the 56. This model has 3584 Stream Processors, 10.5 TFLOPs of compute, 410 GB/s memory bandwidth (so around an 800 MHz memory clock) and a price tag of 399 USD.
Radeon RX Vega 64: The AMD Radeon RX Vega 64 comes in three variants. The cheapest one is 499 USD; this card has 4096 Stream Processors, 8 GB of HBM2 memory and 484 GB/s bandwidth. The TDP is 295W and it will offer up to 12.66 TFLOPs of compute.
The fastest Vega is called RX Vega 64 Liquid Cooled Edition. This model has a higher TDP (345W), but also higher clocks, up to 1677 MHz. This card will cost you 699 USD.
Edit: There's also supposed to be an RX Vega 56 Nano, similar in design to the previous Radeon R9 Nano graphics card; however, there are no further details available at the moment.
More info: http://radeon.com/RXVega
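The quoted spec figures are easy to sanity-check: GCN executes one FMA (two FLOPs) per stream processor per cycle, and HBM2 is double data rate on a 2048-bit bus. A minimal Python sketch; note the boost clocks below are back-calculated from the post's own TFLOPs numbers, not official AMD specifications:

```python
# Sanity-check the TFLOPs and bandwidth figures quoted above.
# GCN: one FMA (2 FLOPs) per stream processor per cycle.
# HBM2: double data rate on a 2048-bit bus.

def tflops(stream_processors, clock_ghz):
    """FP32 compute throughput in TFLOPs."""
    return stream_processors * 2 * clock_ghz / 1000.0

def hbm2_bandwidth_gbs(mem_clock_mhz, bus_width_bits=2048):
    """Memory bandwidth in GB/s."""
    return mem_clock_mhz * 2 * bus_width_bits / 8 / 1000.0

print(tflops(3584, 1.471))      # RX Vega 56: ~10.5 TFLOPs
print(tflops(4096, 1.546))      # RX Vega 64: ~12.66 TFLOPs
print(tflops(4096, 1.677))      # Liquid Cooled Edition: ~13.7 TFLOPs
print(hbm2_bandwidth_gbs(800))  # ~409.6 GB/s, i.e. the "around 410 GB/s" figure
```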
Videocardz.com has an article, 'AMD Radeon Vega graphics cards explained', in which the AMD Radeon Pro 56 and AMD Radeon Pro 64 are listed. I'm assuming these are the two that will be going into the iMac Pro. There is, however, no mention of their power usage at the moment.
Look at the power draw, I really wonder what the iMac Pro will deliver.
The GPU in the iMac Pro has lower voltage and a lower core clock @ 1.35 GHz, so the power draw will be lower.
Look at the power draw, I really wonder what the iMac Pro will deliver.
According to Apple, that will be an 11 TFLOPS GPU inside the iMac Pro. And right now, even the Vega 56 (10.5 TFLOPS) already costs 210W to achieve that.
ATM, it looks like they will undervolt and downclock the Vega 64 to fit it inside the iMac Pro. However, I don't think it's possible to cut power by 50% (still ~150W) while sacrificing only 10% of performance. If it were possible, then IMO AMD would be really stupid to market the Vega 64 as a 295W GPU.
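For a rough feel of why undervolting can cut power faster than it cuts performance: dynamic power scales roughly as P ∝ C·V²·f, so a voltage drop pays off quadratically while the clock (and performance) drops only linearly. A sketch under assumed numbers; the voltages below are purely hypothetical illustrations, only the ~1.35 GHz iMac Pro clock comes from this thread:

```python
# Rough dynamic-power scaling sketch: P ~ C * V^2 * f.
# The voltages here are hypothetical, NOT AMD specifications.

def power_ratio(v_new, v_old, f_new, f_old):
    """Ratio of new dynamic power to old dynamic power."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# e.g. undervolting from a hypothetical 1.20 V @ 1.55 GHz
# down to 1.00 V @ 1.35 GHz:
r = power_ratio(1.00, 1.20, 1.35, 1.55)
print(f"{r:.2f}")  # ~0.60: roughly 40% less dynamic power for ~13% less clock
```

This ignores static (leakage) power and TDP headroom, so it only bounds the best case; cutting a full 50% of board power while losing 10% of performance is harder than the formula alone suggests.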
If so, then it's proof again that AMD just wants to **** with consumers with their big, power-hungry, component-heavy models.
The Vega Nano is 150W, but can it achieve 11 TFLOPS at that power draw?
I hope for AMD's sake it's good enough in compute tasks that they can sell a bunch of these to data centers, because as a gaming chip it's uninspiring.
Links?
1.2 GHz on a GPU with 4096 GCN cores gives 9.83 TFLOPs.
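The arithmetic behind that figure, and for comparison the ~1.35 GHz iMac Pro clock mentioned earlier in the thread, follows from GCN's two FLOPs (one FMA) per stream processor per clock:

```python
# GCN: 2 FLOPs (one FMA) per stream processor per cycle.
cores = 4096
print(round(cores * 2 * 1.20 / 1000, 2))  # 9.83 TFLOPs at 1.20 GHz
print(round(cores * 2 * 1.35 / 1000, 2))  # 11.06 TFLOPs at 1.35 GHz, in line with Apple's 11 TFLOPS claim
```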
AMD confirmed that their Draw Stream Binning Rasterizer was completely disabled in Vega FE, and that Vega FE had not enabled the power-saving features (load balancing).
Are you interested in this architecture, and AMD hardware? Serious question.
Links?
It was only a few weeks ago that AMD announced the launch of the Radeon Vega Frontier Edition, and tests quickly revealed that draw-stream binning rasterization (DSBR) was not enabled on it despite the Vega architecture supporting it. AMD today confirmed that Vega 10 does indeed support it, and that RX Vega SKUs should too. We are not sure yet if there will be a Radeon Pro software driver update to enable it on the prosumer Vega Frontier Edition at this point.
With Vega, AMD has also devised a new method to deal with the geometry pipeline. This also comes down to effective pixel shading and rasterization, wherein the new "Primitive Shader" combines both geometry and vertex shader functionality to increase peak throughput by as much as 100% in the native pipeline relative to Fiji. The base improvement immediately helps in the rendering of scenes with millions of polygons where only a fraction is visible on screen at any time; a video game environment is a prime example, with objects in front of others. Implementing primitive shader support comes partly with DX12 and Vulkan, but ultimately falls to the developers again, which can end up limiting the applications we actually see. To aid adoption, AMD has increased the discard rate for the native pipeline to ~2x that of Fiji and, more importantly, as much as 5x via the Vega NGG fast path implementation. Again, there has been no mention of the NGG fast path being available any time soon, so it is a feature that may end up being theoretical only.
You do not have to rely on the SSG GPUs.
So... Jarred Land (president of RED Digital Cinema) posted on his FB page that AMD gave him an alpha version of a Vega-based Pro 2TB SSG GPU to test out. He compares it with the TITAN Xp. It's an insane beast. I hope Apple uses this bad boy, if they stick with AMD! Check it out:
Moving on, perhaps the burning question for many readers now that they have the specifications in hand is expected performance, and this is something of a murky area. AMD has published some performance slides for the Vega 64, but they haven’t taken the time to extensively catalog what they see as the competition for the card and where the RX Vega family fits into that. Instead, what we’ve been told is to expect the Vega 64 to “trade blows” with NVIDIA’s GeForce GTX 1080.
Yes, I'm interested in how spectacularly ATI managed to miss performance targets while consuming more power than most systems can provide.
Are you interested in this architecture, and AMD hardware? Serious question.
If so, I'm pretty sure this post will be interesting for you:
Yes, I'm interested in how spectacularly ATI managed to miss performance targets while consuming more power than most systems can provide.
And if they prematurely introduced something before the drivers and software were ready - I'm curious as to why.
Here are 1000 words to describe the Vega release:
Driver support alone may not help this architecture. The applications have to be reworked to utilize some of the Vega architecture's features (FP16, for example, is a must; the same goes for Primitive Shaders, but that feature also relies on driver support, and it's Vulkan and DX12 only).
From AnandTech:
Obviously this expectation is with whatever driver voodoo AMD is bringing (or not bringing) with the finished Vega RX drivers.
P.S. Which performance targets do you say AMD missed...?
I'm sure they were talking about performance per watt for the Vega Nano.
Performance per watt promises, for one.
P.S. Gaming is a different story when it comes to efficiency, and professional workloads are another story entirely. I thought this was a professional forum, but once again we are only talking about gaming performance.
Links?
Heh. Didn't both the 1080 and 980 have similar issues at launch? I remember all the horrible compute scores, and the cries of "The drivers are still early!"
Worked out so badly for Nvidia....
You should apply to be the next Trump White House communications director. Your belief in alternative facts makes you a shoo-in.
I have posted previously: in a perfect world, with properly optimized software, a 1.6 GHz, 512 GB/s Vega should be two times faster than Fiji.