http://ve.ga

Information about the Vega architecture is coming in 3 days.

Edit: Someone on reddit spotted this in the film:
[image: bVobYT.jpg]


I have to say, I laughed my ass off.
 
https://www.forum-3dcenter.org/vbulletin/showthread.php?p=11252606#post11252606

Someone spotted this on the site promoting the event.

To wrap it up:

  • 8x Capacity/Stack
  • 4x Power Efficiency
  • 2x Bandwidth per Pin
  • 2x Peak Throughput per Clock
  • High Bandwidth Cache + Controller
  • Next Generation Compute Engine
  • Vega NCU
  • Next Generation Pixel Engine
  • Draw Stream Binning Rasterizer
  • Primitive Shaders
  • Rapid Packed Math
  • 512TB Virtual Address Space
Tile-based rasterization. Primitive Shaders. Next Generation Compute Unit. It's a completely new architecture.
GCN 2.0. Finally.

The interesting bit is the 4x power efficiency. Well, let's wait and see if they can deliver it.

P.S. About Vega: AMD did a proper job demoing it at the New Horizon event. That way there is no room for speculation and no potential for overhyping the hardware. People genuinely believe that the 300W GPU will be only 10% faster than a GTX 1080 in Doom at 4K. So if reality turns out different in the end, the perception of it can only get better.

Have you seen any speculation about Vega's capabilities in the mainstream press and forums, before and after the event? ;) Yeah, the marketing department did a proper job here ;).
 
http://www.freepatentsonline.com/y2016/0371873.html
The first patent for technology that will be in the Vega architecture. It's not just tile-based rasterization, but also culling technology for tiles that are not visible to the observer. It's a much improved technique that increases efficiency and throughput.
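
To make the idea concrete, here is a minimal sketch of how a binning rasterizer with per-tile culling can skip hidden work. Everything in it (tile size, data layout, names) is my own illustration, not the patented implementation:

```python
# Minimal sketch of a binning rasterizer with coarse per-tile culling.
# Purely illustrative: tile size, names and data layout are assumptions,
# not AMD's hardware design.

TILE = 32  # tile edge in pixels (assumed)

def bin_triangles(triangles, width, height):
    """Sort triangles into the screen-space tiles their bounding boxes touch."""
    bins = {}
    for tri in triangles:
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        tx0 = int(max(min(xs), 0)) // TILE
        tx1 = int(min(max(xs), width - 1)) // TILE
        ty0 = int(max(min(ys), 0)) // TILE
        ty1 = int(min(max(ys), height - 1)) // TILE
        for ty in range(ty0, ty1 + 1):
            for tx in range(tx0, tx1 + 1):
                bins.setdefault((tx, ty), []).append(tri)
    return bins

def rasterize(bins, occluder_z):
    """Skip whole tiles whose geometry sits behind an already-stored occluder."""
    work = 0
    for key, tris in bins.items():
        nearest = min(v[2] for tri in tris for v in tri)
        if nearest >= occluder_z.get(key, float("inf")):
            continue  # every triangle in this tile is hidden: zero pixel work
        work += len(tris)  # stand-in for per-pixel rasterization and shading
    return work

# Two triangles; the second sits behind an occluder at z=0.5 in its tile.
tris = [((0, 0, 0.1), (40, 0, 0.1), (0, 40, 0.1)),
        ((64, 64, 0.9), (90, 64, 0.9), (64, 90, 0.9))]
print(rasterize(bin_triangles(tris, 128, 128), {(2, 2): 0.5}))  # 4 tiles of work
```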
Second patent:
http://www.freepatentsonline.com/20160085551.pdf
It is related to this:
[image: ur0HalTTnxZZ1fNVPtgx1WM-D6t1X5DGRCqpweAppX0.png]

This is another technology that increases the efficiency of the CUs and helps to utilize them fully.

Also, it appears that my information about the Vega architecture and its redesigned shader engines is correct. The 64 CU GPU will have 8 shader engines, and the 48 CU GPU will have 6 shader engines.

There is another part to this information, but it needs further confirmation.

Also, a 512TB virtual address space means 49-bit addressing. So basically this is a physical implementation of HSAIL and HSA 2.0: unified memory, and that's why the High Bandwidth memory is called a cache, not memory. This is an architectural feature, so it will also be available on Raven Ridge APUs.
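
For reference, the arithmetic behind that figure:

$$2^{49}\ \text{bytes} = 2^{9} \times 2^{40}\ \text{bytes} = 512\ \text{TB}$$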
 
So we got new Vega details today. The big takeaway is that it looks like a generational leap in design from Fiji/Polaris, with 2 HBM stacks and a huge die at ~520 mm² (bigger than GP102). While AMD didn't announce a shipping date, if they follow the Polaris path it should ship sometime in Q2.

My fear is that this chip is too big to reasonably put in the current Mac Pro form factor. I think one of the biggest reasons we haven't seen a revised tube is that every higher-performing GPU released since Tahiti has been bigger and hotter (save Polaris 10).
 
Summary of CES so far: AMD and Nvidia have great stuff in their pipelines, but didn't announce anything. Intel shows no sign of innovation at all, but still announces a load of new CPUs. :rolleyes:

Vega looks promising; I hope they'll deliver. I didn't expect a product announcement today, but I did expect at least some hard numbers. All we got was a few seconds of DOOM gameplay on an unknown Vega variant. Performance looks neat, but I hope it isn't the biggest Vega: it should perform a lot better than the 1080, given almost twice the die size...
 
The best thing about the Vega architecture, IMO, is that from now on there will be no need for the developer to manage the resources available to the GPU. The GPU will do it by itself.


There is an important piece of information.

Fiji was capable of registering only 4 polygons per clock per 4 shader engines. Vega is capable of registering up to 11 polygons per clock per 4 shader engines. The thing is... the 4096 GCN core chip in the Vega architecture has 8 shader engines, instead of the 4 SEs in the 4096 GCN core Fiji ;). So it will register up to 22 polygons per clock in total, not just 4 ;).
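
A quick back-of-the-envelope on what that scaling implies. The clock speed below is only an assumed example, not a confirmed Vega spec:

```python
# Peak geometry throughput implied by the figures above.
# The 1.5 GHz clock is an assumed example, not a confirmed Vega spec.

polys_per_clock_per_4_se = 11      # claimed Vega figure
shader_engines = 8                 # 4096-core Vega, per the post
clock_hz = 1.5e9                   # assumption, for illustration only

polys_per_clock = polys_per_clock_per_4_se * shader_engines / 4   # 22.0
print(f"{polys_per_clock:.0f} polygons/clock "
      f"-> {polys_per_clock * clock_hz / 1e9:.0f} Gpolygons/s peak")
```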
 
I have been gathering knowledge over the past few days about the Vega architecture and how it works, to try to understand what we are looking at.

First thing:
[image: AMD-VEGA-VIDEOCARDZ-21.jpg]

The Programmable Geometry Pipeline is a hardware feature that is essentially a variation of the FOV rendering technique, executed at the hardware level rather than in software. The tricky part is that developers have to program the game or game engine for the GPU to cull the unseen polygons from the rendered scene, to save GPU resources and power, or to spend them on other things. Part of this, of course, is the Primitive Shaders feature.
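
To picture the kind of work that moves into hardware here, this is roughly the per-primitive culling such a stage could perform before rasterization. A minimal sketch under my own assumptions, not AMD's actual primitive shader code:

```python
# Minimal sketch of per-primitive culling before rasterization.
# Illustrative only; not AMD's implementation.
# Vertices are (x, y) pairs in normalized device coordinates, -1..1 on screen.

def cull(triangles):
    """Drop triangles that can never contribute a visible pixel."""
    survivors = []
    for tri in triangles:
        (x0, y0), (x1, y1), (x2, y2) = tri
        # Back-face test: non-positive signed area means it faces away.
        area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
        if area <= 0:
            continue
        # Trivial frustum test: all vertices outside the same edge -> cull.
        if all(x < -1 for x, y in tri) or all(x > 1 for x, y in tri):
            continue
        if all(y < -1 for x, y in tri) or all(y > 1 for x, y in tri):
            continue
        survivors.append(tri)
    return survivors

tris = [((-0.5, -0.5), (0.5, -0.5), (0.0, 0.5)),   # visible, front-facing
        ((0.5, -0.5), (-0.5, -0.5), (0.0, 0.5)),   # back-facing: culled
        ((2.0, 2.0), (3.0, 2.0), (2.0, 3.0))]      # off-screen: culled
print(len(cull(tris)))  # 1
```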

Secondly: the High Bandwidth Cache, which is simply HBM2 memory. AMD thinks the amount of RAM will become less and less important, and what will truly matter is its bandwidth. Unified memory finally has a hardware approach to get it working at decent bandwidth and low latency.
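
The "cache, not memory" framing is easiest to picture as demand paging: local HBM2 holds only the hot pages of a much larger virtual allocation. A toy sketch of that behavior, with page size, capacity and eviction policy all assumed by me:

```python
# Toy model of a "high bandwidth cache": fast local memory holds only the
# hot pages of a much larger virtual address space. Page size, capacity and
# the LRU policy are assumptions for illustration, not AMD's design.

from collections import OrderedDict

PAGE = 64 * 1024      # 64 KiB pages (assumed)
LOCAL_PAGES = 4       # tiny "HBM2" capacity, just for the demo

class HighBandwidthCache:
    def __init__(self):
        self.resident = OrderedDict()  # page number -> data, LRU ordered

    def access(self, addr):
        page = addr // PAGE
        if page in self.resident:
            self.resident.move_to_end(page)    # hit: refresh LRU position
            return "hit"
        if len(self.resident) >= LOCAL_PAGES:
            self.resident.popitem(last=False)  # evict the coldest page
        self.resident[page] = bytearray(PAGE)  # fetch from system memory
        return "miss"

hbc = HighBandwidthCache()
for addr in (0, PAGE, 0, 5 * PAGE, 6 * PAGE, 7 * PAGE, 0):
    print(hex(addr), hbc.access(addr))
```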

Thirdly: massively improved geometry performance. According to AMD, the Fiji chip was capable of registering only 4 polygons per 4 shader engines per clock. Vega is capable of registering up to 11 polygons per 4 shader engines per clock.

Fourthly: so far we have seen the demo in both Doom and Star Wars: Battlefront at 4K resolution. In Doom the GPU was 15% faster than an overclocked GTX 1080, and in SW:BF it was around 45% faster than a Fury X.

What this means: there are no game-ready drivers yet. Some hardware features have to be implemented by developers to make use of them; others can just be added to the drivers and will work right away. The difference is that devs will have less work managing resources, because the GPU will handle them by itself, thanks to tile-based rasterization. However, the Programmable Geometry Pipeline will not be available until devs update their software for it.

Fifthly: Vega is most likely the base of the Project Scorpio console APU's GPU: 3072 GCN cores, clocked at under 1000 MHz, with a 384-bit GDDR5 memory bus. It's a modified Vega, because desktop Vega has only an HBM2 memory controller. It's built for APUs and high-end desktops.
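
For scale, here is what such a 384-bit GDDR5 bus would deliver; the per-pin transfer rate is an assumed example, not a figure from this thread:

```python
# Bandwidth of a 384-bit GDDR5 bus. The 7 Gbps per-pin rate is an assumed
# example; no console memory clock is confirmed in this thread.

bus_width_bits = 384
gbps_per_pin = 7.0   # assumption for illustration

bandwidth_gb_s = bus_width_bits * gbps_per_pin / 8
print(f"{bandwidth_gb_s:.0f} GB/s")  # 336 GB/s
```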

And lastly: the GPU is not faster than the Pascal Titan X without optimizations made by developers. When they design games with it in mind, and those who make them for Project Scorpio and PC also bear it in mind, big Vega will be not only faster but also more efficient. The architecture was designed with the future in mind, so it will only get better over time, like every previous GCN iteration did. Don't be discouraged by the initial performance, at least in games. In compute it is a true monster, and you can mark my words here.

Also, Vega is the most advanced GPU architecture we have seen to this day; it beats even GP100.

This is all I can say after an initial look at it.
 
Tile-based rasterization and the Programmable Geometry Pipeline have nothing to do with each other.
 
That's not what I meant there ;). I pointed out that some hardware features will be available from the start, while for others we will have to wait until devs adopt them.

Nowhere in that bolded paragraph did I mean that they are bound together.
 
Don't confront marketing with technical facts.... ;)
Aiden, please for once read in context...
 

That still has nothing to do with developers managing resources. Tile-based rasterization is just a rendering method that saves resources by rendering only what is visible on screen and not what is hidden. That method is the reason Nvidia gained a big boost in efficiency going from Kepler to Maxwell while staying on the same 28nm node.
 