LOL - they're comparing the latest unreleased ATI GPU chips to the Nvidia chips released a year ago.

I guess that they'd be red-faced if they compared them to the Nvidia Volta chips announced a week ago.

Right, given that GV100 absolutely destroys GP100 and the other Pascal chips in deep learning tasks, I stand by my earlier assessment. If the best they can do is 35% compared with a year-old GPU, they're in trouble at the high end IMO.
 
LOL - they're comparing the latest unreleased ATI GPU chips to the Nvidia chips released and shipping a year ago.

I guess that they'd be red-faced if they compared them to the Nvidia Volta chips announced a week ago.
Which is the GP100 graphics card?
 
Which is the GP100 graphics card?

The one in the DGX-1, Quadro GP100 and Tesla P100.



http://www.anandtech.com/show/11102/nvidia-announces-quadro-gp100
 
That can be bought today for $6000 instead of $9000.

Let's see how much Frontier costs.

Or more importantly, how many they sell and how much profit they make off of them. It's all well and good to price it at $1000 if nobody buys one because they're all waiting for Volta instead, resulting in AMD continuing to lose money every quarter. We'll see, I guess.
[doublepost=1494978305][/doublepost]
True, but actually the DGX-1 has the GP100 chip on a special NVLink form-factor card.

The Quadro GP100 and Tesla P100 have the same GPU chip on a standard form-factor PCIe card - with DisplayPort output on the Quadro, and displayless (compute-only) for the Tesla.

The Titan Xp and GTX 1080Ti use the GP102 chip.

Exactly right, yep. I would assume a similar set of products for Volta as well (NVLink2-based special systems like the DGX-1V or whatever it's called, plus Quadro/Tesla and probably Titan cards).
 
Or more importantly, how many they sell and how much profit they make off of them. It's all well and good to price it at $1000 if nobody buys one because they're all waiting for Volta instead, resulting in AMD continuing to lose money every quarter. We'll see, I guess.
Frontier does not seem to be competing with Volta this year, but with Quadro and Titan.
[doublepost=1494978797][/doublepost]
I thought he was talking about the Volta workstation.
 
Where did they say it'd be a fraction of the price of a GP100/GV100?

That's fair. I was basing this off previous Radeon Pro-type cards, which usually land in the $1500-$2000 range. As it is, there doesn't seem to be a price yet. AMD has said they are seeking a 2x price/performance advantage over Pascal. I do think it's fair to say that, given we expect some form of Vega in the consumer space priced under $700, even the Vega Frontier Edition will likely be cheaper than GP100, whose cheapest flavor is still north of $5,000.
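To put that 2x price/performance target in rough numbers (purely illustrative, since Frontier has no announced price: the $2,000 below is an assumption based on past Radeon Pro pricing, and the ~10 TFLOPS FP32 for Quadro GP100 follows from its roughly 5 TFLOPS of DP at a 1:2 ratio):

```c
#include <stdio.h>

int main(void) {
    /* Assumed figures for illustration only, not announced prices:
       Vega Frontier Edition: 13 TFLOPS FP32, assumed ~$2,000.
       Quadro GP100: ~10 TFLOPS FP32, ~$6,000 as mentioned earlier in the thread. */
    const double vega_tflops  = 13.0, vega_price  = 2000.0;
    const double gp100_tflops = 10.0, gp100_price = 6000.0;

    const double vega_gflops_per_dollar  = 1000.0 * vega_tflops  / vega_price;   /* ~6.5 */
    const double gp100_gflops_per_dollar = 1000.0 * gp100_tflops / gp100_price;  /* ~1.7 */

    printf("Vega FE : %.1f GFLOPS/$\n", vega_gflops_per_dollar);
    printf("GP100   : %.1f GFLOPS/$\n", gp100_gflops_per_dollar);
    printf("Ratio   : %.1fx\n", vega_gflops_per_dollar / gp100_gflops_per_dollar); /* ~3.9x */
    return 0;
}
```

If that assumed price is anywhere near right, the FP32-per-dollar gap is comfortably past 2x; even at $3,000 it would still clear the target.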

Right, given that GV100 absolutely destroys GP100 and the other Pascal chips in deep learning tasks, I stand by my earlier assessment. If the best they can do is 35% compared with a year-old GPU, they're in trouble at the high end IMO.

GV100 is a monster, but I don't expect Apple to be offering $5K+ GPUs in their computers any time soon. GV100 probably won't be available as an add-in card this year, so for at least 6 months Vega will be competing with Pascal. If we are being realistic, Apple would probably be considering the more consumer-oriented Nvidia cards such as the GTX 1080, GTX 1080 Ti, Titan, etc. If consumer Volta has a lot of the machine learning and DP FP hardware stripped out, Vega may fare better.

Also, for Apple's purposes, they care about more tasks than just machine learning. Those demos where they can edit 8K video using the integrated SSD are definitely something Apple would want to take advantage of.

Additionally, Pascal's macOS drivers are buggy, so if this is the best Nvidia can offer, it's no wonder Apple is stuck with AMD.
 
According to AMD, in DeepBench the Vega GPU is 30-35% faster than the GP100 chip in machine learning.
..... They also launched the Radeon Vega Frontier Edition: 4096 GCN cores, 1.6 GHz, 16 GB HBM2, 13 TFLOPS of FP32 performance.
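As a quick sanity check of those quoted numbers (assuming one fused multiply-add, i.e. two FLOPs, per ALU per clock at the stated 1.6 GHz):

```c
#include <stdio.h>

int main(void) {
    /* Quoted Vega Frontier Edition figures: 4096 GCN ALUs at ~1.6 GHz.
       One FMA per ALU per clock counts as 2 FLOPs. */
    const double alus     = 4096.0;
    const double clock_hz = 1.6e9;

    const double fp32_tflops = alus * 2.0 * clock_hz / 1e12;   /* ~13.1 */
    /* Vega's packed math runs FP16 at twice the FP32 rate, which lines up
       with the "25 TFLOPS or better" half-precision target quoted later for the MI25. */
    const double fp16_tflops = 2.0 * fp32_tflops;              /* ~26.2 */

    printf("FP32: %.1f TFLOPS\n", fp32_tflops);
    printf("FP16: %.1f TFLOPS\n", fp16_tflops);
    return 0;
}
```

So the 13 TFLOPS headline is just ALUs x clock x 2, nothing exotic.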

I watched most of the presentation. It didn't really launch; it is coming in June, and they are going to talk about it until then. "Frontier" is likely just high-priced, low-volume, high-margin. This is extremely likely to be priced far out of reach of most normal people.

AMD needs to ship before the nosebleed-priced Volta product ships. Not that it will stay in the lead, but they have to get their first-iteration HBM2 product out the door before their competitor gets to their second. (You'd have thought AMD would have a lead given they shipped an HBM1 product, but that really didn't transfer.)
[doublepost=1495004018][/doublepost]
....

I guess that they'd be red-faced if they compared them to the Nvidia Volta chips announced a week ago.

I doubt they have a Volta card (or that even most P100 customers have one). They had a P100 card at the meeting to do the side-by-side demo comparison with. The explicitly mentioned main objective is just to be on the comparison charts. It is not necessarily to win. At this point they aren't even commonly charted. You have to get into the Premier League first before you can talk about winning the whole thing.

They have a solid plan. It will take time and execution. The bigger gap they have is in software library coverage, not in hardware. It took several people months to tune up their DeepBench code. They need dozens, not thousands, of folks to do that over a broader set of libraries. It isn't easy, but it is tractable.
[doublepost=1495004611][/doublepost]
Right, given that GV100 absolutely destroys GP100 and the other Pascal chips in deep learning tasks, I stand by my earlier assessment. If the best they can do is 35% compared with a year-old GPU, they're in trouble at the high end IMO.

Depends on whether they can make the feature that isn't highlighted in that deep learning benchmark work well. Vega also has a variant with an on-board SSD where you could train over a dataset larger than the HBM2 can hold. The workloads that have been tried are those that fit. There are different problem spaces that can be explored with Vega that Nvidia may not cover.
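To make the larger-than-HBM2 point concrete, the basic pattern is just streaming the dataset through a buffer that does fit in card memory. A minimal sketch in plain C (the file name and process_chunk() are placeholders for illustration, not AMD's actual SSG API):

```c
#include <stdio.h>
#include <stdlib.h>

#define CHUNK_BYTES (256UL * 1024 * 1024)   /* stays well under the 16 GB of HBM2 */

/* Placeholder: in a real setup this would hand the chunk to the GPU
   (copy or map) and run the training/compute kernel over it. */
static void process_chunk(const unsigned char *buf, size_t n) {
    (void)buf; (void)n;
}

int main(void) {
    FILE *f = fopen("dataset.bin", "rb");    /* hypothetical dataset file */
    if (!f) { perror("dataset.bin"); return 1; }

    unsigned char *buf = malloc(CHUNK_BYTES);
    if (!buf) { fclose(f); return 1; }

    size_t n;
    while ((n = fread(buf, 1, CHUNK_BYTES, f)) > 0)
        process_chunk(buf, n);   /* one pass; training would loop over epochs */

    free(buf);
    fclose(f);
    return 0;
}
```

The interesting part on the SSG-style card is that the SSD sits on the board itself, so that streaming loop doesn't have to pull every chunk across the bus from host storage.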

That may not be the mainstream ML market now, but all AMD needs is a subset where they have some leverage.

As usual this is devolving into some fanboy death match. It isn't only about bragging rights on tech-porn benchmarks. AMD can win without Nvidia losing.
 
Yeah, it was me who was the fanboy! :D
So they're about 9 months too late, and about to be absolutely crushed by GV100? Cool.
Vega is a 530 mm2 die, and it is crushing the 9-month-old 600 mm2 die from Nvidia.

It crushes on performance, and on price-to-performance, because according to leaks the GPU will cost $999.
AMD announced in December that a server made from 100 Vega GPUs will cost $129,000.
LOL - they're comparing the latest unreleased ATI GPU chips to the Nvidia chips released and shipping a year ago.

I guess that they'd be red-faced if they compared them to the Nvidia Volta chips announced a week ago.
The demo and results were performed in real life, you half-wit. Is there any GV100 chip to compare to?
Link?
[doublepost=1494977165][/doublepost]
Link?
Why am I supposed to do research for you? There are enough YouTube reviews comparing the RX 460 to the RX 560 to see the difference between them. But hey, you will blame me for your inability to search for data.
As usual this is devolving into some fanboy death match. It isn't only about bragging rights on tech-porn benchmarks. AMD can win without Nvidia losing.
I love this sentence.

Edit: A bonus from AMD marketing slides.
[AMD marketing slide image]
 
Is AMD turning east? Fugly cards there. Maybe some flashing LEDs next time.
They always had serious-looking cards; now they're competing with motherboard/graphics card makers for the worst-looking cards?
I would expect a reference design to be sober.
End rant.

Specs and performance look impressive though. This time AMD might just deliver.

http://pro.radeon.com/vega-frontier-edition
 
Can anyone explain the differences?

Here the RX 480 is smoking the GTX 1070 in professional applications on the Ryzen platform. What the hell? How come nobody has seen this before? Even I had not spotted this before.
 
Can anyone explain the differences?

Here the RX 480 is smoking the GTX 1070 in professional applications on the Ryzen platform. What the hell? How come nobody has seen this before? Even I had not spotted this before.
AMD's OpenCL 2.x driver is able to use hUMA and therefore gives the CPU a holiday.
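For anyone wondering what "gives the CPU a holiday" looks like in code: OpenCL 2.0 shared virtual memory lets the host and GPU work on one allocation instead of shuttling buffers back and forth. A minimal host-side sketch (coarse-grained SVM; assumes a context, queue and kernel already exist, and error checking is omitted):

```c
#include <CL/cl.h>
#include <string.h>

/* Coarse-grained SVM: one allocation visible to both CPU and GPU, so there is
   no clEnqueueWriteBuffer/clEnqueueReadBuffer copy step. Error checks omitted. */
void run_with_svm(cl_context ctx, cl_command_queue queue, cl_kernel kernel,
                  const float *input, size_t n)
{
    size_t bytes = n * sizeof(float);
    float *data = (float *)clSVMAlloc(ctx, CL_MEM_READ_WRITE, bytes, 0);

    /* CPU writes straight into the shared allocation (map/unmap is required
       for coarse-grained SVM, but it is a synchronization point, not a copy). */
    clEnqueueSVMMap(queue, CL_TRUE, CL_MAP_WRITE, data, bytes, 0, NULL, NULL);
    memcpy(data, input, bytes);
    clEnqueueSVMUnmap(queue, data, 0, NULL, NULL);

    /* The kernel gets the same pointer; no separate device buffer exists. */
    clSetKernelArgSVMPointer(kernel, 0, data);
    size_t gws = n;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &gws, NULL, 0, NULL, NULL);
    clFinish(queue);

    clSVMFree(ctx, data);
}
```

Whether the driver actually backs this with zero-copy hUMA pages rather than migrating them depends on the platform, which is presumably where the Ryzen-plus-Radeon combination does well.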
 
I'd like to see if an Nvidia GTX 1070 would benefit from a Ryzen CPU. In theory, hUMA should work with it too. And in this review they didn't test Nvidia with Ryzen.

I think that is why Nvidia has teamed up with IBM POWER: because hUMA zero-copy is not working with Intel and a dGPU. And they cannot trust AMD, their biggest competitor. NVLink is there to help, but it still won't fix the problem. So, for a real HPC environment, Nvidia needs IBM.
 
I'd like to see if an Nvidia GTX 1070 would benefit from a Ryzen CPU. In theory, hUMA should work with it too. And in this review they didn't test Nvidia with Ryzen.

I think that is why Nvidia has teamed up with IBM POWER: because zero-copy is not working with Intel. And they cannot trust AMD, their biggest competitor. NVLink is there to help, but it still won't fix the problem. So, for a real HPC environment, Nvidia needs IBM.
I am actually wondering if zero-copy is not working with Intel because of Nvidia, or simply because Intel does not want to have anything to do with Nvidia. ;)

After all, Nvidia is a direct competitor to Intel in HPC.
 
Vega is a 530 mm2 die, and it is crushing the 9-month-old 600 mm2 die from Nvidia.

Remember, this was only a single canned demo. We need many more data points before we deem one chip to be crushing the other. For instance, I imagine GP100 would have much higher performance in DP compute tasks, given its 5 TFLOPS of DP compute.

The demo and results were performed in real life, you half-wit. Is there any GV100 chip to compare to?

Please refrain from name-calling. You are doing it here and in other threads. It's really not necessary.
 
AMD is starting off on the wrong foot when it comes to showing Vega performance against competitors.
I think they should refrain from comparing; as usual the test conditions are set up to suit them, and this looks really bad for them.
Why not just state the card's performance and let people draw their own conclusions? It would spare them the bad image they're getting for "rigging" the bench.
Boys will be boys...
 
...

I think that is why Nvidia has teamed up with IBM POWER: because hUMA zero-copy is not working with Intel and a dGPU. And they cannot trust AMD, their biggest competitor. NVLink is there to help, but it still won't fix the problem. So, for a real HPC environment, Nvidia needs IBM.

hUMA works on Intel. The teaming with IBM is because IBM was already pursuing CAPI (https://en.wikipedia.org/wiki/Coherent_Accelerator_Processor_Interface) and NVLink isn't all that different. IBM opening the doors to OpenCAPI just makes it that much easier.

"...In a sense, both CAPI and NVLink are specialized versions of PCI-Express. IBM leaves the underlying PCI-Express hardware alone and provides abstraction layers in its firmware and its chips to make CAPI devices look like they plug into a “hollow core” on the CPU and act like any other core when it comes to addressing shared memory. Nvidia has provided a much higher bandwidth interconnect than is available with PCI-Express 3.0 and is even better than PCI-Express 4.0 and will widen the gap more, we presume, with its second NVLink iteration. ..."
https://www.nextplatform.com/2016/05/04/nvlink-takes-gpu-acceleration-next-level/

The POWER architecture already had a PCIe-like, but lower-latency, interface opened up onto the core. If Nvidia wanted to, they could possibly make something of an Intel Omni-Path-to-NVLink bridge that would do something similar. Omni-Path hasn't rolled out to a much larger infrastructure than POWER at the moment, so we won't see that right now. Intel's own parts are probably going to be the primary things on Omni-Path for years.

Until POWER can get a healthy, bigger OpenCAPI ecosystem going, IBM can spend time on tweaks to enable NVLink mode. Don't be surprised if AMD comes up with an OpenCAPI-to-Infinity Fabric glue chip, though. They'd be able to sit on POWER8/9 systems too if they wanted to put in the incremental work after they finish getting Epyc fully launched and flowing.
 
AMD is starting off on the wrong foot when it comes to showing Vega performance against competitors.
I think they should refrain from comparing; as usual the test conditions are set up to suit them, and this looks really bad for them.
Why not just state the card's performance and let people draw their own conclusions? It would spare them the bad image they're getting for "rigging" the bench.
Boys will be boys...
No, I found these Frontier numbers very interesting. If it costs $1000, it looks very compelling for individuals.
 
AMD is starting off on the wrong foot when it comes to showing Vega performance against competitors.
I think they should refrain from comparing; as usual the test conditions are set up to suit them, and this looks really bad for them.

In the context of this analyst presentation, that makes absolutely no sense at all. Comparing companies and the markets they are targeting is all analysts do. These folks aren't really end users. The primary purpose here is to convey whether AMD is worth buying as a company or not. The tech-porn press is all over it because it is new porn. There will be more at Computex in a couple of weeks.

The second primary audience of this analyst-day stuff is the system makers, who do their own benchmarking on their own targeted apps. AMD's benchmarks are more of an "are you tall enough to ride this roller coaster" litmus test. AMD wasn't being invited to design bake-offs; now they increasingly are. That is in part due to what they are doing in these "pre-launch tech before we can ship" demos. It isn't to prime newegg.com and amazon.com sales or next week's "Fry's bargain buy".
[doublepost=1495039487][/doublepost]
No, I found these Frontier numbers very interesting. If it costs $1000, it looks very compelling for individuals.

The Radeon Pro Duo is $999 and the MI25 has yet to show up.

"...AMD previously indicated with the Radeon Instinct MI25 announcement that they would be targeting 25 TFLOPS or better of half precision (FP16) performance on their high-end Vega parts, and the Vega Frontier Edition will be delivering on the “or better” part of that ...."
http://www.anandtech.com/show/11403/amd-unveils-the-radeon-vega-frontier-edition

I doubt AMD is going to deliver the Frontier at the same price as or cheaper than the Pro Duo and leave no room for the MI25. The Pro Duo is two high-yield GPUs (because they have been in production for almost a year) with normal, very mature GDDR5 memory and no interposer overhead. The Frontier Edition is probably a very low-yield GPU (clocked at the extreme high end of its range) with not-very-volume-scalable HBM2 memory that is substantially more expensive.

The point is to be less expensive than a $4-7K solution from your competitor. Something priced at $2K-3.5K does that. If AMD prices it in the $2.5K range and it sags back to $1.5K over a year, they'd still be making margin.

They aren't trying to drive volume with this card. They are trying to grow a development ecosystem so that there will be software written and ready for when the volume-priced card(s) are released.
 