GPUs are good at code without interdependent state or complex conditional flow. CPUs are better when control flow is complex or parallelism is limited.

Think of a GPU as a collection of tiny, underpowered CPUs.

A lot of that information looks pretty old; for example, GPUs have had support for conditional flow control for over a decade at this point. There can still be a performance penalty if there is a lot of divergence between the threads in a warp (or the AMD equivalent), but it's not enough to make a GPU a bad target for a massively parallel program. The sheer number of execution cores in a GPU means it'll always be a better fit for tasks with a lot of parallelism.
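To illustrate the divergence point, here's a minimal CUDA sketch (the kernel, sizes, and math are made up for illustration, not from anything in this thread). Odd and even lanes of the same 32-thread warp take different branches, so the hardware runs both paths one after the other with lanes masked off, rather than truly in parallel:

```cuda
// Minimal warp-divergence sketch: threads in the same warp branch differently
// based on their index, so the warp serializes both paths (illustrative only).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void divergent(float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // Odd and even lanes of one 32-thread warp take different branches,
    // so the warp executes path A and then path B with inactive lanes masked.
    if (i % 2 == 0) {
        out[i] = sinf((float)i);   // path A
    } else {
        out[i] = cosf((float)i);   // path B
    }
}

int main() {
    const int n = 1 << 20;
    float *d_out = nullptr;
    cudaMalloc(&d_out, n * sizeof(float));
    divergent<<<(n + 255) / 256, 256>>>(d_out, n);
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}
```

The code still runs correctly on the GPU; the cost is that some lanes sit idle on each path, which is exactly the penalty being described.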
 
I think that I said exactly that, but from the opposite viewpoint: if you don't have massive parallelism, a GPU might not be a good fit. If you have data dependencies, you often have limited parallelism. If you have complex control flow, a GPU might technically be usable, but a CPU is faster.

And "having conditional flow control" and "having efficient conditional flow control" are two different things. A GPU might be good for streams that are mostly sequential with a small number of branches, but fail to scale for streams with lots of conditionals and dependencies.

I have one rack of systems with 72C/144T and quad Titan X (Maxwell). Most of the jobs we run on those systems have phases where GPU would be a negative and all the CPU cores that you can throw at it are used. Other phases of a job are 100% GPU and the CPUs are mostly idle. (I love when "htop" shows 144 bar graphs for core activity, and all are at 100%.)

To get these jobs done quickly, huge numbers of both CPU cores and CUDA cores are the trick. (I have GTX 1080Ti in shipment now to upgrade the Titans.)
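For what it's worth, here's a minimal CUDA sketch of that kind of mixed workload (the kernel and the CPU loop are stand-ins, not the actual jobs described above). A kernel launch returns immediately, so the host cores can grind through a CPU-heavy phase while the GPU runs its own:

```cuda
// Sketch of overlapping a GPU phase with CPU work; the actual computations
// here are placeholders.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void gpu_phase(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;   // stand-in for real GPU work
}

int main() {
    const int n = 1 << 24;
    float *d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    // The launch is asynchronous: the GPU starts working in the background.
    gpu_phase<<<(n + 255) / 256, 256>>>(d_data, n);

    // Meanwhile the CPU-heavy phase runs on the host cores.
    double cpu_sum = 0.0;
    for (int i = 0; i < 10000000; ++i) cpu_sum += 1.0 / (i + 1.0);

    cudaDeviceSynchronize();   // wait for the GPU phase before the next stage
    printf("cpu_sum = %f\n", cpu_sum);
    cudaFree(d_data);
    return 0;
}
```

In a real job you'd scale the host side out across all the cores (OpenMP, MPI, etc.), but the overlap principle is the same.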

There is no "one size fits all" in GPU work. The problem space is much larger than any one of us is familiar with.
 

Sounds like we're in violent agreement, yep :)
 
I see that the RX 480 is already maxed out in several popular games, so there's no point in getting a 580 now either.

The Witcher 3 at 4K Ultra does not run at 60 fps even with a 1080 Ti.

Personally I would go for the 1070 and eventually add a second one, since that setup can hit 60 fps.

Let's wait and see what Vega offers above this level.
 
GPUs offer a staggering amount of processing power. CPU performance is still measured in hundreds of GFLOPS, while GPUs have passed the 10 TFLOPS mark, so you're looking at something in the ballpark of 10-30x more raw processing power on the GPU side. CPUs will never be able to compete on massively parallel tasks.
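A rough back-of-the-envelope comparison shows where that ballpark comes from; the figures below are illustrative (roughly a 10-core AVX2 CPU with two FMA units per core versus a GTX 1080 Ti-class GPU), not numbers taken from this thread:

```latex
\underbrace{10 \times 3.0\,\text{GHz} \times 32\ \tfrac{\text{FLOP}}{\text{cycle}}}_{\text{CPU}} \approx 0.96\ \text{TFLOPS},
\qquad
\underbrace{3584 \times 1.58\,\text{GHz} \times 2\ \tfrac{\text{FLOP}}{\text{cycle}}}_{\text{GPU}} \approx 11.3\ \text{TFLOPS}
```

That works out to roughly 12x in peak single-precision throughput, before accounting for memory bandwidth or how well either side is actually utilized.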

I have only followed developments in ray tracing. I was a bit surprised that a 12-core CPU takes about the same time to render a test scene in Blender as a GTX 980: http://barefeats.com/blender.html. I don't know the validity of this test.
 
Just doing my journalistic duty...

Supposedly leaked Vega 10 Time Spy score (screenshot). Look at the graphics score.

GTX 1080 Ti score (screenshot), for comparison.

Take it with a grain of salt, but these are supposedly leaks. The previous leak, which Manuel linked, was supposedly about small Vega performance.

And you still have to bear in mind that there are features of the Vega architecture that are not being used yet but are key to its performance gains: FP16 and primitive shaders. They require rewriting the application to use them (that is why SiSoft Sandra is reporting a 1:1 FP16 ratio rather than 2x vs. FP32).
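To give a sense of what "rewriting the application" means, here is a rough sketch using CUDA's half2 intrinsics, since that syntax is more widely known; Vega's packed FP16 is the analogous feature on the AMD side and goes through its own intrinsics and shader path. The point is the same: two FP16 values get packed into one register and processed per instruction, so plain FP32 code sees none of the 2x rate.

```cuda
// Packed-FP16 sketch (illustrative only): each __half2 holds two FP16 values,
// and __hfma2 performs two FP16 fused multiply-adds per instruction.
// Requires a GPU with native FP16 arithmetic (compile with e.g. -arch=sm_60).
#include <cuda_fp16.h>
#include <cuda_runtime.h>

__global__ void init(__half2 *a, __half2 *b, __half2 *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        a[i] = __float2half2_rn(1.5f);
        b[i] = __float2half2_rn(2.0f);
        c[i] = __float2half2_rn(0.5f);
    }
}

__global__ void fma_half2(const __half2 *a, const __half2 *b, __half2 *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = __hfma2(a[i], b[i], c[i]);   // 2 FP16 FMAs per instruction
}

int main() {
    const int n = 1 << 20;
    __half2 *a, *b, *c;
    cudaMalloc(&a, n * sizeof(__half2));
    cudaMalloc(&b, n * sizeof(__half2));
    cudaMalloc(&c, n * sizeof(__half2));
    init<<<(n + 255) / 256, 256>>>(a, b, c, n);
    fma_half2<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

None of this happens automatically, which is why benchmarks that just run existing FP32 code won't show the doubled FP16 throughput.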

Let's wait and see what will happen soon. P.S. It appears that this is the exact reason why Nvidia is rushing the Volta release.
 

That's promising. With a core clock of 1600 MHz, that puts it above 12 TFLOPS, right?
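Quick sanity check, assuming the rumored 4096 stream processors for Vega 10 and counting an FMA as two FLOPs per clock:

```latex
4096 \times 1.6\,\text{GHz} \times 2\ \tfrac{\text{FLOP}}{\text{cycle}} \approx 13.1\ \text{TFLOPS}
```

So yes, just over 13 TFLOPS of single-precision throughput, if those specs hold.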
 
P.S. It appears that this is the exact reason why Nvidia is rushing the Volta release.

Source? You can't "rush" a GPU release by 6-12 months, i.e. if Volta is coming in Q3 2017 then they've been working towards that date for a long time now.
 
Volta was slated for Q4 2017 for HPC and Q1-Q2 2018 for consumer, but it might come faster than that.
 
And besides that, this just shows that Vega is roughly equal to the GTX 1080 Ti. If that's the case, it's still somewhat of a win for Nvidia, because GP102 is most likely cheaper to manufacture than Vega 10 due to its smaller die size and more conventional memory. Not to mention it's been out for 9 months.
 
No. Previous benchmarks show that the Vega card with device ID 687F:C1 is faster than the GTX 1080 in games (Doom with Vulkan, Star Wars Battlefront). According to the Time Spy benchmark that Manuel posted, it is on par with the GTX 1070. But this GPU has a different device ID: 687F:C3, to be precise.

Time Spy is nowhere near showing the true performance of the Vega GPUs.
 
There's a KFA2 single-slot 1070. On the AMD side, the only single-slot consumer option is the RX 460.
 
Never believe internet rumors for release dates. Remember when Vega was going to be released fall 2016?
 
A release date of Q4 is different from a release date of 12.01.2016.

Every company is going with release "spans", and those can still shift, for various reasons. One of the funniest things about AMD in this regard is that sometimes people are intentionally misinformed about the products and their details, to avoid leaks. Even Apple does not do anything like this.

In that post from August I claimed that Vega and Zen would launch in 2016. Zen launched in March 2017. Why? They were not able to iron out the teething problems in time.

I am not going deeper on Volta because... I'm sure you will know soon enough why.
 
Then maybe you shouldn't trust rumors and quote them like they are fact.

Right, he's making it sound like NVIDIA is worried about Vega and is rushing to get Volta out as a response. There is no actual evidence to support this claim, but that won't stop him from posting this as if it's an undeniable fact.
 
Right, and Intel is also not moving its schedule up because of Ryzen/Naples.
We're all believers.
There might be no actual "evidence" of the rush, but do we really believe that if any of the involved parties could milk a design a bit further, they would just launch something else ahead of time?
I'm counting on the usual suspects to refute this with their infinite wisdom, of course; we know how it goes. Maybe keep OEMs happy, since rebranding wouldn't cut it? Or maybe some other excuse? Sorry, reason.
Don't really want to pick a(nother) fight here with the usual people but come on, there's always a reason behind moving a schedule, in either direction.
I won't discuss this anymore, I foresee another long exchange of nasty posts.
 

What happens if Volta is twice as fast as the 1080 Ti? Perhaps NVIDIA has moved the schedule up because it smells blood in the water (i.e. Vega delayed by 6+ months) and wants to continue dominating at the high end. Maybe Vega will be able to squeak out a few wins versus the 1080 Ti, maybe it'll be faster across the board, or maybe it'll lose across the board. We still don't know, because the release keeps getting pushed further and further back. I continue to be amused at koyoot's posts claiming that AMD is coming from a position of strength here, or that NVIDIA is in trouble.
 