Yeah, but you can't turn these voltages down too far, or not enough chips will qualify to run at them. You need a bit of headroom to maintain stability.



AMD themselves have said it's coming in 2017. This is the type of thing public companies don't lie about, since they are accountable to their stockholders. There were "whispers" that Polaris was coming early this year, that it had 2560+ cores, that it was a 125 W card, and that it could match Fury. None of these were correct.
1120 MHz at 0.96 V is the stock core clock and stock voltage stored in the GPU BIOS.

The RX 480 at stock clock IS running at 0.96 V. You didn't read that graph properly. All of the information on that graph comes from the BIOS settings of each GPU, listed by code name.
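
For context on where those numbers live: a GPU BIOS exposes a table of power states, each pairing a core clock with the voltage chips must qualify for at that clock. A minimal sketch of such a table; only the 1120 MHz / 0.96 V entry comes from this thread, and every other value and the layout itself is an illustrative assumption, not a dump of a real RX 480 BIOS:

Code:
#include <cstdio>

// Hypothetical clock/voltage state table, loosely modeled on the power
// states a Polaris BIOS exposes. Illustrative values only.
struct DpmState {
    int clockMHz;   // core clock for this state
    int voltage_mV; // voltage the chip must be qualified for at this clock
};

int main() {
    const DpmState states[] = {
        { 300,  800}, // idle (assumed)
        { 900,  900}, // intermediate state (assumed)
        {1120,  960}, // the stock point discussed above: 1120 MHz @ 0.96 V
        {1266, 1150}, // boost state (assumed)
    };
    for (const DpmState& s : states)
        std::printf("%4d MHz @ %.2f V\n", s.clockMHz, s.voltage_mV / 1000.0);
    return 0;
}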

As for Vega, let's wait and see. There will be a third GPU coming out "soon", as Raja Koduri said at the SIGGRAPH presentation. You can hear and see it at the beginning of AMD's keynote.
 
Remember when the 3 GHz Power Mac G5 was coming "soon"?
 
No, I don't remember. I was not interested in technology at that time.

Secondly, it has nothing to do with what will happen with Vega.
 
It has everything to do with it. "Soon" is what companies say when they don't know when the tech will land and don't want you to buy a competitor's products. Vega is still a ways from release, and Nvidia likely already released a GPU today that can beat it: the GP102-based Titan X.
 
AMD knows exactly when the GPU will land on the market...

Secondly, that story does not have anything to do with Vega 10. It is only your assumption.
 
[image: Vega.gif]


When I hear Vega, this is what I see xD
 
I must be old because when I hear Vega, I think of the Chevy:

[image: 75-Vega-ad.jpg]


My neighbor had a Kammback (not pictured). The best thing about the Vega is that when you walked out to your driveway and saw what car you had, you were inspired to walk instead, thus getting some exercise.

Sorry about the OT post. I'm still enjoying this AMD thread.
 
In addition to the post about tile-based rendering technology and the potential new rasterizer in Vega 10, I will repost something from another forum:
sebbbi at beyond3d forum said:
I would assume that the tile size matches the ROP cache size. However, Nvidia hardware doesn't have dedicated ROP caches, so I'd assume that the tile buffer resides in L2 cache (where they usually keep the ROP outputs). Did you pixel count the tile sizes? My guess would be something between [32x32, 128x128] as that's close to the footprint of traditional ROP caches.

Some years ago I did ROP cache experiments with AMD GCN (7970) in order to optimize particle rendering. GCN has dedicated ROP caches (16 KB color, 4 KB depth). In my experiment I split the rendering to 64x64 tiles (= 16 KB). This resulted in huge memory bandwidth savings (and over 100% performance increase), especially when the overdraw was large (lots of full screen alpha blended particles close to the camera). You can certainly get big bandwidth advantages also on AMD hardware, as long as you sort your workload (by screen locality) before submitting it.

It's hard to draw 100% accurate conclusions from the results. This doesn't yet prove whether Nvidia is just buffering some work + reordering on the fly to reach a better ROP cache hit ratio, or whether they actually do hidden surface removal as well (saving pixel shader invocations in addition to bandwidth). This particular test shader doesn't allow the GPU to perform any hidden surface removal, since it increases an atomic counter (it has a side effect).

To test HSR, you'd have to enable z-buffering (or stencil) and use [earlydepthstencil] tag in the pixel shader. This tag allows the GPU to skip shading the pixel even when it has side effects (DX documentation is incorrect about this). Submit triangles in back-to-front order to ensure that early depth doesn't cull anything with immediate mode rendering. I would be interested to see whether this results in zero overdraw on Maxwell/Kepler (in this simple test with some overlapping triangles and also with higher triangle counts).

It would also be interesting to know how many (vertex output) attributes fit to the buffer.

The new (Nvidia and Oculus) multiview VR extensions would definitely benefit from separating SV_Position part of the vertex shader to its own shader. This would also greatly benefit tiled rendering (do tile binning first, execute attribute shader later). I wouldn't be surprised if Nvidia did already something like this in Maxwell or Pascal, as both GPUs introduced lots of new multiview VR extensions.

I just wish Nvidia would be as open as AMD regarding to their GPU architecture :)
I wonder what effect you would get with a new, proper scheduler for 4096 GCN cores, massive bandwidth from HBM, and 64 ROPs.

In the bolded part, we can potentially see the effect on frame rendering times.
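
To make that concrete: the savings sebbbi describes come from binning work into screen-space tiles sized to the ROP color cache, so a tile's blending traffic stays on-chip across heavy overdraw instead of spilling to VRAM. A minimal CPU-side sketch of that binning step, assuming 64x64 tiles and screen-space particles; all names and structures are illustrative, not sebbbi's actual code:

Code:
#include <cstdio>
#include <vector>

// Minimal sketch of screen-locality binning, as in sebbbi's 64x64-tile
// particle experiment. All names and sizes are illustrative assumptions.
constexpr int kTileSize = 64;   // 64x64 px * 4 B/px ~= 16 KB ROP color cache
constexpr int kScreenW  = 1920;
constexpr int kScreenH  = 1080;
constexpr int kTilesX   = (kScreenW + kTileSize - 1) / kTileSize;
constexpr int kTilesY   = (kScreenH + kTileSize - 1) / kTileSize;

struct Particle { float x, y; }; // screen-space center

int main() {
    std::vector<Particle> particles = { {100, 50}, {105, 52}, {1800, 900} };

    // Bin particles by the tile their center falls in.
    std::vector<std::vector<int>> bins(kTilesX * kTilesY);
    for (int i = 0; i < (int)particles.size(); ++i) {
        int tx = (int)particles[i].x / kTileSize;
        int ty = (int)particles[i].y / kTileSize;
        bins[ty * kTilesX + tx].push_back(i);
    }

    // Submitting one tile's particles together keeps blending traffic in
    // the ROP color cache instead of hitting VRAM on every overdraw layer.
    for (int t = 0; t < (int)bins.size(); ++t)
        if (!bins[t].empty())
            std::printf("tile %d: %zu particles\n", t, bins[t].size());
    return 0;
}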
 
http://www.benchmark.rs/artikal/test_sapphire_radeon_rx_470_nitro_8_gb-4169/7
http://semiaccurate.com/2016/08/04/amds-radeon-rx-470-review/
This is the best-value GPU I have seen in a long time.
http://www.tomshardware.com/reviews/amd-radeon-rx-470,4703-5.html
The standard reference RX 470: 125 W of power consumption at a 1206 MHz core clock. 50 MHz more means about 20 W more consumed.
https://www.computerbase.de/2016-08...tt_so_takten_die_asus_powercolor_und_sapphire

Well, higher voltage has a gigantic impact on power consumption on this node, as we can see. It appears that this node, or at least the GloFo/Samsung process, was designed for very low voltages.
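
A rough sanity check on that: dynamic power scales approximately as P ~ C * V^2 * f, so a clock bump that also needs more voltage costs disproportionate power. A sketch of the arithmetic on the reference RX 470 numbers above; the 0.96 V baseline is the figure discussed earlier in the thread, and the backed-out voltage is an estimate, not a measurement:

Code:
#include <cstdio>
#include <cmath>

// Dynamic power scales roughly as P ~ C * V^2 * f. Given the reported
// 125 W @ 1206 MHz and ~145 W @ 1256 MHz, back out the implied voltage
// bump. The 0.96 V baseline is from this thread; the rest is ratio math.
int main() {
    double p1 = 125.0, f1 = 1206.0, v1 = 0.96;
    double p2 = 145.0, f2 = 1256.0;

    // P2/P1 = (f2/f1) * (V2/V1)^2  =>  V2 = V1 * sqrt((P2/P1) / (f2/f1))
    double v2 = v1 * std::sqrt((p2 / p1) / (f2 / f1));
    std::printf("implied voltage at %.0f MHz: %.3f V (+%.1f%%)\n",
                f2, v2, (v2 / v1 - 1.0) * 100.0);
    // ~5.5% more voltage accounts for a 16% jump in power at +50 MHz.
    return 0;
}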
 
Like I said before, the 470 is my GPU of choice right now.
I like power-of-2 numbers, and its specs are just that.
I would like to see the memory clocked higher, though. Maybe a 475?
 
The problem is the price. It's only $20 cheaper than the 4 GB 480 model. This really should be $150, and probably will be once all the cards are fully out and stock is plentiful.
 
Yeah, the RX 470 is a great little card but it is priced pretty close to the RX 480. Why would you go with an aftermarket 470 if you can get a 480 for cheaper?
 
IF (and it's a big if) Apple fully supports the RX 480 in macOS Sierra, but Sierra only runs on cMP 4,1 and later, is there any hope of getting support for the RX 480 in cMP 3,1 and 2,1 running 10.11.6?
 
Put it under "self-serving ATI press releases".

Synthetic aperture arrays go back 30 years or more. There are probably much more impressive CUDA-based synthetic aperture arrays online - but you didn't see them.

Really, don't embarrass yourself by posting ATI press releases like this.
To you, it is nothing important.

I am not blown away by the amount of hardware (and I am the fanboy here?), but I am blown away by the project and the task. Mapping the universe is a pretty huge thing from a science perspective, don't you think?

The technology here is absolutely beside the point.
 
Seems like the 460 is also not that great after all; it still sometimes gets beaten by older tech.

I have been pretty disappointed with the performance and efficiency of Polaris. It doesn't seem like they hit their 2X performance per watt target. Hopefully Vega does better.

If Apple came out with a Mac Pro using Polaris 10 as the high-end GPU, it would only improve SP compute from 7 TFLOPS to ~10 TFLOPS, and DP compute performance would go down. Not great for three years' worth of GPU improvements.
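
For reference, those single-precision figures follow directly from shader count x 2 FLOPs per clock (FMA) x clock speed. A quick sketch of the arithmetic; the dual-RX 480 configuration is hypothetical, and the clocks used are the published base figures:

Code:
#include <cstdio>

// SP throughput = shader count * 2 FLOPs (FMA) * clock.
// Counts/clocks are published specs; the dual-RX 480 Mac Pro is hypothetical.
double tflops(int shaders, double clockGHz) {
    return shaders * 2.0 * clockGHz / 1000.0;
}

int main() {
    double d700  = tflops(2048, 0.850); // FirePro D700 (current Mac Pro)
    double rx480 = tflops(2304, 1.120); // RX 480 at base clock
    std::printf("dual D700 : %.1f TFLOPS\n", 2 * d700);  // ~7.0
    std::printf("dual RX480: %.1f TFLOPS\n", 2 * rx480); // ~10.3
    return 0;
}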
 
So far, I see that it performs as it "should" perform.

Power consumption appears to be at 60 W-ish levels. Where are the problems? The only problem I have is that this GPU should be around $79 for the 2 GB model. It will not be worth more.

P.S. The strange thing is that in some games this GPU appears to be slower than the R9 260X, which has exactly the same number of GCN cores as the RX 460 but a slower core clock and memory. So take those results with a huge grain of salt.

P.S.2 On the other hand, as I see it, the GTX 750 Ti on Newegg is still at 120-ish dollar prices, and the GTX 950 is at $150-160.
 
I think there are a lot of people complaining about the performance who would love to have nVidia chips in the new Mac Pro. I too hope that Apple makes a solution that allows pros to choose whatever GPU they want. I'd also like to see them allow aftermarket power supplies and the like, so that we as consumers can add ANY card, and however many we want, in these machines.

However, having said ALL of that, I'm pulling for AMD in BOTH the CPU and GPU space. I see them in the place Apple was in '96 and '97, fighting for their lives. And if Apple can help with that by awarding them OEM status, I'm for that. The Polaris chips, in my opinion, have done what AMD said they would do. If you look at AMD's previous lineup objectively, Polaris delivers on their promises. Do they outperform nVidia card for card? No. Do they outperform nVidia dollar for dollar in the US? Absolutely!

I'm glad Apple included Polaris drivers in macOS Sierra. If they release a mini tower with at least two PCI slots and an RX 480 installed, I have a maxed-out 2015 iMac 5K for sale to someone.
 

Being able to add your own GPU to the Mac Pro would solve a lot of the complaints people have, especially given the lack of Mac Pro updates and now that the 14/16 nm GPU generation is upon us.

I am also rooting for AMD, but they are in a tough spot. It feels like they are being outmuscled by Intel and Nvidia. Intel has been crushing them in efficiency, and it's still unclear whether AMD's Zen will bring them back to parity. Intel still has a process advantage, with its own 14 nm node being smaller than GlobalFoundries'.

Nvidia has also been winning in GPU efficiency with Maxwell and Pascal, and it has the ability to release more specialized cards for compute and gaming, whereas AMD has to release cards that are jacks of all trades. Nvidia's 16 nm GPU release has been especially impressive: they essentially released four GPUs (GP100, GP102, GP104 and GP106) covering most of the market in the span of a few months. AMD has to spread this out over at least a year because they don't have the same engineering throughput Nvidia has.

When it comes to Polaris pricing, while it offers great performance per dollar, it looks like AMD is not making much profit on each card sold. Why would the RX 470 be within $20 of the faster RX 480 unless they are cutting their margins really close? Nvidia released the GTX 1060, a smaller GPU die that it sells for more at roughly equivalent performance. Obviously we don't know sales numbers, but Nvidia has the benefit of the GTX 1080/Titan X being halo products that help sell its mainstream parts.

AMD's best bet is to leverage combining their CPUs and GPUs on the same chip, as they have been doing for consoles. Intel has been doing something similar with the processors it bundles with Iris graphics, so it remains to be seen whether there is a market for CPUs with more powerful embedded GPUs.
 