The first "review" is out for the frontier edition. Performance and efficiency are underwhelming. Gaming its consistently beaten by the GTX 1080, and in professional apps it does better, but it still seems consistently beaten by its Pascal based competitors. The gaming performance is especially concerning, as its consuming 100+ W more than the GTX 1080 while getting worse performance. Lets hope its just some teething problems with the new architecture and AMD can work out the performance issues.
 
The first "review" is out for the frontier edition. Performance and efficiency are underwhelming. Gaming its consistently beaten by the GTX 1080, and in professional apps it does better, but it still seems consistently beaten by its Pascal based competitors. The gaming performance is especially concerning, as its consuming 100+ W more than the GTX 1080 while getting worse performance. Lets hope its just some teething problems with the new architecture and AMD can work out the performance issues.
That review makes me even more confident that my recent purchase of twenty GTX 1080 Ti cards was a smart move.

300 watts for that level of performance? What a letdown from what the local AMD hype machine was claiming.

Especially embarrassing since the AMD hype machine was criticizing Intel CPUs for running hotter even while being much faster - and now we have Vega being hotter and slower. ;)

Another underwhelming release by the red team.
 
Another factor is that even though it's advertised as a 13 TFLOP card, it can only maintain a clock speed of 1440 MHz, meaning that it's actually only an 11.8 TFLOP card. That iMac Pro is going to sound like a jet engine if it has to dissipate a 300 W GPU with a single blower fan.
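For anyone wanting to check the math: FP32 TFLOPS is just stream processors × 2 FLOPs per clock (fused multiply-add) × clock speed. A quick sanity check, assuming the Frontier Edition's 4096 stream processors:

```python
# FP32 throughput = stream processors * 2 FLOPs per clock (FMA) * clock speed
def fp32_tflops(stream_processors, clock_mhz):
    return stream_processors * 2 * clock_mhz * 1e6 / 1e12

print(fp32_tflops(4096, 1600))  # ~13.1 TFLOPS at the advertised 1600 MHz boost clock
print(fp32_tflops(4096, 1440))  # ~11.8 TFLOPS at the ~1440 MHz sustained clock
```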

I can't get over just how inefficient it is. I wonder if the GlobalFoundries process is that much worse than TSMC's.
 
Another factor is that even though it's advertised as a 13 TFLOP card, it can only maintain a clock speed of 1440 MHz, meaning that it's actually only an 11.8 TFLOP card. That iMac Pro is going to sound like a jet engine if it has to dissipate a 300 W GPU with a single blower fan.

The 2014 iMac maxed out at 288 W with a single blower; the 2015 at 240 W. All without a jet-engine reputation.
iMac Power consumption and thermals ( https://support.apple.com/en-us/HT201918 )

Apple's claim for the iMac Pro is only 11 TFLOPs, so they can downclock from 1440 MHz. They have been 2 TFLOPs back from the bleeding edge all along.
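Running the same TFLOPS math in reverse (again assuming the full 4096-shader Vega part), Apple's 11 TFLOPs figure would only need a sustained clock of roughly 1340 MHz:

```python
# Clock needed to hit a given FP32 TFLOPS target with 4096 stream processors
target_tflops = 11.0
clock_mhz = target_tflops * 1e12 / (4096 * 2) / 1e6
print(round(clock_mhz))  # ~1343 MHz, i.e. about 100 MHz below the 1440 MHz mentioned above
```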


I can't get over just how inefficient it is. I wonder if the GlobalFoundries process is that much worse than TSMC's.

A card that is meant for developers and leading-edge folks to buy and use as a tool to write optimized code doesn't have optimized code yet. Well yeah, that is what the card is for.

There is very little evidence this is primarily about silicon process differences at all. Different architectures, different code, and different objectives. How you would accurately measure process differences through all of that is a deep mystery.
 
The first "review" is out for the frontier edition. Performance and efficiency are underwhelming. Gaming its consistently beaten by the GTX 1080, and in professional apps it does better, but it still seems consistently beaten by its Pascal based competitors. The gaming performance is especially concerning, as its consuming 100+ W more than the GTX 1080 while getting worse performance. Lets hope its just some teething problems with the new architecture and AMD can work out the performance issues.

Let's just hope, but I think it's only fair to have some bad feelings about this. Given our only evidence of the Vega line, the results are pretty damned abysmal. I'm starting to think the best possible case is that AMD will release the Vega RX line and it will turn out to be like the RX 470/480 series and serve as "mid-range", again leaving nVidia with no competition and forcing prices to remain high.

And that high-wattage **** has to go. I mean like, right now. No one wants a 300 W gaming card. That's like buying a 10 MPG vehicle. If you get 10 MPG, it better go 1000 mph.

My prediction? I believe that AMD will release the RX series, and the fastest GPU will BARELY keep up with the GTX 1070 while carrying a 30-40%+ higher TDP.

I'm only in it for the lower prices for us consumers. I couldn't care less about team red vs team green. Right now, we are all getting screwed.
 
Yeah, a repeat of the last two years. Clock for clock, Radeon has been better for a long time. Their architecture is sound, but if they can't manage power and temps then they can't match Nvidia's clock speeds. That's how Nvidia pulls ahead.
 
Have you tried the RX 580 in 10.12.6 Beta or High Sierra? How does it perform?
Also, which RX 580 did you manage to get your hands on? Sapphire Pulse?

Not yet. I just went to pick up a cMP, but it sold. I'll keep trying this weekend, or I'll get an eGPU case for my MBP.

I have the PowerColor Red Dragon 8GB. Pic attached. It's currently sitting in a PC mining with a 295X2.
 

Attachments: IMG_3876.JPG (1.5 MB)
These are ideal mining clocks I have reached after a lot of trial and error and observing averages. No ROM changes, just MSI Afterburner.

If you mine with the 580 8GB 8000 MHz version, you can reduce voltage by 93 mV and underclock the GPU down to 1000 MHz. You lose no performance. Overclocking is unstable though.

If you mine with the 290X, 390X, 295X - you can reduce voltage by 100 mV. Leave everything else default. There's no gain from under- or overclocking. Modified timings can actually result in less performance.

If you mine with a GTX 1070 - set the power limit to 70%, underclock the GPU by 300-400 MHz, and overclock the memory to 4450 MHz (8900 MHz effective).

GTX 1080 is bad for mining. The memory type doesn't have ideal timings.

390X is the best single GPU miner out of the box, but power hungry.

GTX 1070 is the best single GPU miner after the tweaks above.

31 MH/s is the best long-term average you can get on a single GPU due to limits of the Ethash algorithm. Benchmarks can show higher numbers, but real-world performance is not the same.
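To keep all of the above in one place, here is the same tuning guidance collected into a small Python dict (the numbers are taken from the post; the card labels and field names are just my own informal shorthand, not Afterburner setting names):

```python
# Mining tweaks summarized from the post above (applied via MSI Afterburner, no ROM changes).
mining_tweaks = {
    "RX 580 8GB (8000 MHz mem)": {"core_voltage_offset_mv": -93, "core_clock_mhz": 1000},
    "290X / 390X / 295X":        {"core_voltage_offset_mv": -100},  # leave everything else default
    "GTX 1070":                  {"power_limit_pct": 70,
                                  "core_clock_offset_mhz": -350,    # i.e. a 300-400 MHz underclock
                                  "memory_clock_mhz": 4450},        # 8900 MHz effective
}

for card, settings in mining_tweaks.items():
    print(f"{card}: {settings}")
```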
 
The 2014 iMac maxed out at 288 W with a single blower; the 2015 at 240 W. All without a jet-engine reputation.
iMac Power consumption and thermals ( https://support.apple.com/en-us/HT201918 )

Apple's claim for the iMac Pro is only 11 TFLOPs, so they can downclock from 1440 MHz. They have been 2 TFLOPs back from the bleeding edge all along.

I mean, a blower is still a blower. Apple is just adding a second blower for the iMac Pro. If Vega is 250 W at 11 TFLOPS, then the blower will probably be fairly loud. I just purchased the 2017 27" iMac and it's surprisingly audible under CPU or GPU loads.
 
The first "review" is out for the frontier edition. Performance and efficiency are underwhelming. Gaming its consistently beaten by the GTX 1080, and in professional apps it does better, but it still seems consistently beaten by its Pascal based competitors. The gaming performance is especially concerning, as its consuming 100+ W more than the GTX 1080 while getting worse performance. Lets hope its just some teething problems with the new architecture and AMD can work out the performance issues.

The driver for the Radeon RX Vega is not using all (any?) of the GCN5 features. Here's a video which shows, at about 3:49:00, that the tile-based renderer is not used at the moment. It is not clear which of the new GCN5 features are used right now, but the working theory is that the driver works in some Polaris compatibility mode.

I wonder why AMD released the card this early with such limited drivers.
 
The driver for the Radeon RX Vega is not using all (any?) of the GCN5 features. Here's a video which shows, at about 3:49:00, that the tile-based renderer is not used at the moment. It is not clear which of the new GCN5 features are used right now, but the working theory is that the driver works in some Polaris compatibility mode.

I wonder why AMD released the card this early with such limited drivers.

Because they hyped their release dates for months.
 
From the PC World video, it seemed Vega is already faster than the Titan Xp for workstation applications.
 
The driver for the Radeon RX Vega is not using all (any?) of the GCN5 features. Here's a video which shows, at about 3:49:00, that the tile-based renderer is not used at the moment. It is not clear which of the new GCN5 features are used right now, but the working theory is that the driver works in some Polaris compatibility mode.

I wonder why AMD released the card this early with such limited drivers.

Hmm, "future driver improvements" is a common refrain when it comes to AMD hardware. I guess we will find out for sure if and when the gaming version of the card is released.

The execution of this launch is worse than usual for AMD. It's been delayed, and now that it's actually out, they didn't even bother to send it to reviewers. It's like they are ashamed of it.

From the PC World video, it seemed Vega is already faster than the Titan Xp for workstation applications.

Are you sure about that? It looks pretty hit or miss to me.
 
I guess the Ryzen launch did not teach people anything.

For example: in scientific and engineering applications, Ryzen 7 is clock for clock, core for core on par with the 6900K. In gaming at release it was slower than Intel CPUs. Everybody called it a failure. A few months from then, software matured and became optimized, and it tied with Intel. Will anybody call it a failure now?

Vega appears to have higher per-clock performance than Fiji in compute. In games it performs just like an overclocked Fiji. Hmmmm...

Secondly, it is not the drivers. None of the applications are using the Vega architecture features that increase performance. Drivers may report the features to the application, but if devs did not use them, the games will not benefit from them.

The reason why AMD released the GPU in this form is very simple. MONEYZ! And secondly, because it is a brand new architecture, developers can get earlier access to it and its features, optimize their software for this arch, and extract its performance fully. And AMD can also earn MONEYZ!, because devs have to buy the GPU, obviously.

P.S. For me it is absolutely bonkers that reviewers use SPECviewperf rather than other professional applications to test the GPU. Why?
 
AMD suffering from foot-in-mouth disease again, what a surprise...
Oh well, I'm selling my RX 480 on eBay and ordering a GTX 1080 Ti in a few. It will be quite a while before I go red again.
 
For example: in scientific and engineering applications, Ryzen 7 is clock for clock, core for core on par with the 6900K. In gaming at release it was slower than Intel CPUs. Everybody called it a failure. A few months from then, software matured and became optimized, and it tied with Intel. Will anybody call it a failure now?

If Ryzen is so great, why isn't Apple using it in the iMac Pro? The answer is that it's already been passed by Skylake-X. Once again, AMD has to play the value card.

The reason why AMD released the GPU in this form is very simple. MONEYZ! And secondly, because it is a brand new architecture, developers can get earlier access to it and its features, optimize their software for this arch, and extract its performance fully. And AMD can also earn MONEYZ!, because devs have to buy the GPU, obviously.

Crappy launches are not how you make "MONEYZ". When Nvidia releases a GPU, whether it's a Titan or an entry-level model, they try to keep rumors to a minimum, they announce it publicly, reviews go up around the same time pre-orders start, and then the card hits the shelves a week or two after that. They don't need their fanboys justifying why it doesn't perform very well, because they have been delivering the best performance for years now. That seems like a pretty good recipe for making "MONEYZ".

P.S. For me it is absolutely bonkers that reviewers use SPECviewperf rather than other professional applications to test the GPU. Why?

What's bonkers is that AMD didn't send out review samples of their first flagship card in two years, so we are stuck with one single review until everyone has had time to do all their testing.
 
Is the RX 480 now natively supported in Sierra (macOS 10.12)? I know the RX 460 is. Just a bit hesitant to jump all the way to High Sierra, as I have some legacy hardware (a scanner) that I know works fine on 10.12 but has questionable support on 10.13.
 