Still has nothing to do with developers managing resources. Tile-based rasterization is just a rendering method that saves resources by only rendering what ends up visible on screen and not what is hidden. That method is the reason Nvidia gained such a big boost in efficiency going from Kepler to Maxwell while staying on the same 28 nm node.
It is done by the GPU automatically.
In previous versions of the GCN architecture, the rasterizers were connected to the memory controller first.
AMD-VEGA-VIDEOCARDZ-32.jpg

With Vega that has changed.
AMD-VEGA-VIDEOCARDZ-33.jpg

In Vega the rasterizers speak directly to the L2 cache. That's why there is no need to manage resources to get the best out of it; the GPU manages itself. That is the difference I was talking about. And this change will be apparent from the start, because it comes from the nature of the architecture, without any optimization done by developers.
 
Yes, it is done automatically by the GPU. What I am saying is that it has nothing to do with optimization done by developers. They don't touch how the GPU renders the screen, ever lol.
 
I particularly like the new NCU concept.
The 1/16 FP64 ratio is a bit of a letdown, but with Vega 20 the 1/2 rate is finally coming back.
I guess we'll have to wait for the performance metrics to see how optimized it is.
Another 500-series card (this time a 570) showed up in a Samsung laptop.
These seem to be OEM rebrands of the 400 cards.
Maybe the mobile lineup will be 400 rebrands and the desktop parts will be "new" Polaris 12 or Vega 11 and 10.
But this was an RX 570 and not a 570M, maybe a typo?!
 
You know that Vega GPU demoed by AMD? Well, the engineering sample had a 6+2 pin connector... ;)

And it is the bigger of the two Vega GPUs. So after all, the slides Videocardz provided are at least based on something from the real world (the engineering sample will have a maximum board power of 225W: 75W from the PCIe slot plus 150W from the 8-pin connector).
 
The GTX 1080 Founders Edition scores 50-52 FPS on average in Doom at 4K under Vulkan. The fastest and most power-hungry GTX 1080, the AMP! Extreme from Zotac, scores 58 FPS on average while consuming 222W of power.
power_average.png

doom_3840_2160.png


Radeon Vega averages 70 FPS in Doom while not exceeding 225W of power. We are looking at Titan X levels of performance; the Titan X scores 77 FPS in this game.
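
As a quick back-of-the-envelope check of that claim, here is the FPS-per-watt math using only the figures quoted above. Note the Vega wattage is the assumed 225W board-power cap, not a measured draw, so this is a rough estimate rather than a benchmark result.
[CODE]
# Rough FPS-per-watt comparison from the Doom 4K Vulkan numbers above.
# The Vega power figure is the assumed 225W board-power cap, not a
# measured value, so its result is only an estimate.
cards = {
    "GTX 1080 AMP! Extreme (Zotac)": (58, 222),  # (avg FPS, power in W)
    "Radeon Vega (demo)": (70, 225),
}

for name, (fps, watts) in cards.items():
    print(f"{name}: {fps / watts:.3f} FPS/W")
[/CODE]
That works out to roughly 0.26 vs. 0.31 FPS/W, about a 19% edge for the Vega demo, though measured power numbers in real reviews could move that either way.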
 
I'm curious how Vega 10 vs. Vega 20 looks. It seems like Apple is going to use Vega 10 at this point, but Vega 20 will (supposedly) drop six months later and be much improved. AMD's product matrix, though, makes it look like Vega 20 may be a much higher-end, more expensive option.

Not that Vega 10 is bad. A down-clocked Vega 10 will still be a very nice Mac Pro GPU.

Also, without Vega 11 showing up yet, any March iMac update would seem to be stuck at Polaris.
 
Still has nothing to do with developers managing resources. Tile-based rasterization is just a rendering method that saves resources by only rendering what ends up visible on screen and not what is hidden.
I don't think that's its purpose. Smart rasterisers try not to render what's hidden anyway. A tile-based rasteriser will divide the scene into squares (tiles) and render each one separately. A traditional rasteriser will render the scene line by line or column by column. In theory, the resources (textures...) needed to render a line can be diverse, as they are used across the whole scene horizontally. In a tile, which is a small square region that may cover a single object, the resources are less diverse, so they can be stored in fast memory caches.
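
To make the traversal difference concrete, here is a minimal Python sketch of the two orderings. The 1920x1080 frame and 32-pixel tile size are arbitrary examples, and real rasterisers do this per triangle in fixed-function hardware, not per pixel in a loop.
[CODE]
WIDTH, HEIGHT, TILE = 1920, 1080, 32  # arbitrary example dimensions

def scanline_order(width, height):
    """Traditional order: walk whole rows across the frame, so each row can
    touch resources (textures, render targets) from every object along it."""
    for y in range(height):
        for x in range(width):
            yield x, y

def tiled_order(width, height, tile):
    """Tile-based order: finish one small square before moving on, so the
    working set for that square is small enough to stay in on-chip cache."""
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            for y in range(ty, min(ty + tile, height)):
                for x in range(tx, min(tx + tile, width)):
                    yield x, y
[/CODE]
The set of pixels produced is identical either way; only the order, and therefore which resources are hot in the cache at any moment, changes.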
 
Want to know what Polaris 10 XT2 is? Polaris 10 on a new process. 14 nm LPE was used to manufacture the Apple A9 chip. 14 nm LPP is the process on which Polaris 10 and 11 are made. Samsung has since produced two new revisions of the same process: 14 nm LPC and 14 nm LPU. LPC has 10% higher performance than LPP in the same thermal envelope, and LPU has 15% higher performance in the same thermal envelope than LPC. AMD can use both Samsung and GloFo fabs for its 14 nm GPUs.
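
Taking those percentages at face value, the gains compound, so the newest revision's headroom over the original LPP process works out like this:
[CODE]
# Compounding the claimed same-thermal-envelope gains:
# LPC = +10% over LPP, LPU = +15% over LPC.
lpu_over_lpp = 1.10 * 1.15
print(f"LPU vs LPP: +{(lpu_over_lpp - 1) * 100:.1f}%")  # prints +26.5%
[/CODE]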

Consider the situation with the Hawaii/Grenada chips: Hawaii XT was a 250W chip with a 1000 MHz core clock that drew around 267W under load. Grenada XT was a 250W chip that drew 264W under load but was clocked 5% higher, all thanks to a more mature 28 nm process. So expect a similar situation with the Polaris chips: we can expect higher core clocks, but not lower power consumption. On the other hand, the number of Polaris XT GPUs that need only a single 6-pin connector should be higher than it is currently.
 
For the nMP, they'd rather cut the power draw than increase clocks.
If Polaris ever gets into the nMP, that is.
It could be all Vega, 10 and 11.
By the time it comes out, Polaris might be gone already.
 
They will, you'll see. :)
But not the one some of us would want: an upgradeable one.
If it comes out with all the stuff I've been saying (wishing for), it will be great, even with no way to upgrade it again.
Worse, following the rMBP trend, the SSD could be hardwired to the GPUs to take advantage of Vega's huge "cache" access. I guess the Radeon Pro SSG might find its home there.
Not sure if this is good though!!
 
On the Sapphire site one can see that there are many RX 480 versions:
  • Sapphire Radeon™ RX 480 8G D5 1266 MHz
  • SAPPHIRE NITRO+ Radeon™ RX 480 8G D5 OC 1342 MHz
  • SAPPHIRE NITRO+ Radeon™ RX 480 4G D5 OC 1306 MHz
  • Sapphire Radeon™ RX 480 4G D5 1266 MHz
  • SAPPHIRE NITRO Radeon™ RX 480 4G D5 OC 1306 MHz
Do all these versions work on the Mac after editing the AMDRadeonX4000.kext file?
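
For context, the edit people describe usually amounts to adding the card's PCI ID to the IOPCIMatch strings in the kext's Info.plist. The sketch below shows the general idea only; the 0x67DF device ID (Ellesmere/RX 480) and the AMDRadeonX4000.kext path are assumptions that can differ by card and macOS version, and this is not a tested recipe.
[CODE]
#!/usr/bin/env python3
# Sketch: append a PCI device ID to every IOPCIMatch entry in a kext's
# Info.plist. 0x67DF is the usual Ellesmere (RX 480) device ID and 0x1002
# is AMD's vendor ID; the right kext and ID can differ by card and macOS
# version, so verify your own card's ID first.
import plistlib

KEXT_PLIST = "/System/Library/Extensions/AMDRadeonX4000.kext/Contents/Info.plist"
NEW_ID = "0x67DF1002"  # device ID + vendor ID, in IOPCIMatch format

with open(KEXT_PLIST, "rb") as f:
    info = plistlib.load(f)

for name, personality in info.get("IOKitPersonalities", {}).items():
    match = personality.get("IOPCIMatch")
    if match and NEW_ID not in match:
        personality["IOPCIMatch"] = match + " " + NEW_ID
        print(f"Added {NEW_ID} to {name}")

with open(KEXT_PLIST, "wb") as f:
    plistlib.dump(info, f)
[/CODE]
Writing to /System/Library/Extensions requires root and SIP disabled, and the kext cache has to be rebuilt afterwards, so back up the original kext before touching it.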
 
U.2 drives are typically 2.5".

Forget 3.5" bays, unless you want to add a few 10TB or bigger drives for archiving.
Precisely. My point was that 3.5" is not finished in desktops, in the same way 2.5" is not finished in laptops.
 
I've read that you may need to remove the back cover to install an RX 480 in the cMP's PCIe slot 1... does this happen with all RX 480 models? Are there RX 480 models that fit better than others in the cMP?
 
I have a Sapphire non-OC version, which fits perfectly in my 3,1 slot 1.
 