Nobody currently cares about OpenGL.

Everybody has moved on to modern APIs.
That is BS. Not everybody programs for proprietary APIs.

And developing for Vulkan and DX12 is very complex and expensive (unless you're just using a third party engine).
 
It is still inevitable, unfortunately.

OpenGL Next is Vulkan. Vulkan will also be the only API that combines graphics and compute in a single API across all platforms, and this alone will make Vulkan the go-to API for all vendors.
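For what it's worth, that combination is visible right in the API: a single Vulkan queue family can advertise both graphics and compute. Here is a minimal sketch in C, assuming a VkPhysicalDevice was already obtained via vkEnumeratePhysicalDevices (the function name is mine, not from any official sample):

```c
#include <vulkan/vulkan.h>
#include <stdlib.h>

/* Find a queue family that supports both graphics and compute work.
 * Returns the family index, or -1 if none exists. */
int find_graphics_compute_queue(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, NULL);

    VkQueueFamilyProperties *families =
        malloc(count * sizeof(VkQueueFamilyProperties));
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families);

    int index = -1;
    const VkQueueFlags needed = VK_QUEUE_GRAPHICS_BIT | VK_QUEUE_COMPUTE_BIT;
    for (uint32_t i = 0; i < count; i++) {
        if ((families[i].queueFlags & needed) == needed) {
            index = (int)i;  /* one queue family serves both workloads */
            break;
        }
    }
    free(families);
    return index;
}
```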
 
It is not inevitable because many people just can't afford it.

It is out of place for graphics hardware vendors to shift the burden of the driver onto application developers, and it would make for a lot of poorly optimized programs.
 
What is it they can't afford? Currently it will be much more expensive to develop software for a dead platform than for a platform that is on the rise.

Poorly optimized programs? Have you seen the latest iteration of the Doom franchise? It is one of the most optimized games ever made, and it lacks nothing in graphics fidelity and detail.
 
OpenGL is not a dead platform. DX11 might eventually be, because it is a Microsoft gaming API.

Not everybody has id Software's ability to properly optimize their Vulkan programs.
 
Why do you believe that it is SO MUCH harder to develop Vulkan applications than OpenGL ones? :)
 
Because the OpenGL driver takes care of a lot of stuff that Vulkan just leaves up to the application programmer.

It is not about abandoning OpenGL, but about making OpenGL more multithreaded.
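To make that concrete, compare what it takes just to get a buffer. The following is only a sketch, with error handling omitted and the helper name invented: in OpenGL the driver decides where the buffer lives and keeps it resident, while in Vulkan the application itself has to query memory types, allocate, and bind:

```c
#include <vulkan/vulkan.h>

/* OpenGL: three calls, and the driver manages the memory behind them.
 *
 *     GLuint buf;
 *     glGenBuffers(1, &buf);
 *     glBindBuffer(GL_ARRAY_BUFFER, buf);
 *     glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);
 */

/* Vulkan: the application does the driver's old bookkeeping itself.
 * `dev` and `gpu` are assumed to have been created/selected elsewhere. */
VkBuffer make_vertex_buffer(VkDevice dev, VkPhysicalDevice gpu,
                            VkDeviceSize size, VkDeviceMemory *out_mem)
{
    VkBufferCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
        .size = size,
        .usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT,
        .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
    };
    VkBuffer buf;
    vkCreateBuffer(dev, &info, NULL, &buf);

    /* Step 1: ask what kind of memory this buffer needs. */
    VkMemoryRequirements req;
    vkGetBufferMemoryRequirements(dev, buf, &req);

    /* Step 2: pick a suitable memory type by hand. */
    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(gpu, &props);
    uint32_t type = 0;
    for (uint32_t i = 0; i < props.memoryTypeCount; i++) {
        if ((req.memoryTypeBits & (1u << i)) &&
            (props.memoryTypes[i].propertyFlags &
             VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)) {
            type = i;
            break;
        }
    }

    /* Step 3: allocate and bind the memory yourself. */
    VkMemoryAllocateInfo alloc = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
        .allocationSize = req.size,
        .memoryTypeIndex = type,
    };
    vkAllocateMemory(dev, &alloc, NULL, out_mem);
    vkBindBufferMemory(dev, buf, *out_mem, 0);
    return buf;
}
```

And that is the easy part; real code also has to handle sub-allocation, synchronization, and lifetimes, which the OpenGL driver used to do for you.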
 
This way the driver hands a lot of control over to the developers. They have been asking for control over the hardware, and this is what they have got.
 
But the graphics workloads in professional applications are run on compute kernels, not geometry kernels.

Koyoot, let me help you out with Apple's intended use of the Vega chip in the iMac Pro.

When we considered how much we wanted this iMac to be capable of, it was clear that only one graphics chip would do — but that chip didn’t exist yet. So iMac Pro is debuting a new one. The Radeon Pro Vega is over three times faster than any previous iMac GPU, packing the power of a double-wide graphics card into a single chip. All of which translates to higher frame rates for VR, real-time 3D rendering, more lifelike special effects, and gameplay at max settings. It’s one huge reason iMac Pro is power incarnate.

So please stop with this crap that no one should care about Vega's poor graphics performance. It's literally being marketed by Apple for its graphics capabilities.

The 295W GPU is on the level of a GTX 1080 in gaming, at least according to AMD and with the current state of the drivers, and it will still be faster than a Titan Xp in compute-oriented applications, which I think is what matters most to professionals.

This all depends on what your professional application is. If you are working with VR, you are far better off with Nvidia. As for machine learning, despite all the marketing by AMD, I have not seen a single instance of a benchmark or other demonstration outside of AMD showing Vega FE doing machine learning. Every single popular library supports CUDA, not OpenCL. Even in the "professional" benchmarks we have seen for Vega FE, at best it trades blows with the Titan Xp/1080 Ti.

I'm sometimes dumbfounded by the level of knowledge on this forum.

:rolleyes:
 
This way the driver hands a lot of control over to the developers. They have been asking for control over the hardware, and this is what they have got.
Big studios have been asking for control over hardware, not indies or engineering/research developers.
 
Soooo...

You call the architecture a failure because the software is not ready for it? You call the hardware not useful because there has been no software (so far) for this GPU?

What will happen when RX Vega with properly optimized software (gaming) turns out to be, for example, 20% faster than a Titan Xp? What will be a failure then?

Reserve your judgment of the hardware until the software has matured.

Big studios have been asking for control over hardware, not indies or engineering/research developers.
I'm sure they will adapt to the situation.
 
You call the architecture a failure because the software is not ready for it?

No, we're calling the product a failure. A good product can't just be good architecture. If AMD's software division is a bottleneck, that's still a problem.

What will happen when RX Vega with properly optimized software (gaming) turns out to be, for example, 20% faster than a Titan Xp? What will be a failure then?

That's optimistic.

From what I've read, Vega won't see more than a 10% improvement from driver optimizations, which is not nearly enough.
 
No, we're calling the product a failure. A good product can't just be good architecture.

From what I've read, Vega won't see more than a 10% improvement from driver optimizations, which is not nearly enough.
It is not only drivers. Your software has to be rewritten to utilize the hardware features, Primitive Shaders for example, to get the performance uplift.

What are Primitive Shaders? They allow culling unused geometry out of the pipeline (primitives that are not visible to the observer), and they also allow for increased geometry throughput (up to 2.5 times according to AMD, but realistically we are talking about a two-times uplift).

Gaming performance relies on geometry throughput. Those features are available only in the DX12 and Vulkan APIs, so don't expect the GPU to see massive increases in DX11.
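Primitive Shaders themselves are internal to AMD's driver and hardware, not something an application calls directly, but the underlying idea of discarding primitives the observer can never see is a standard test. A minimal CPU-side sketch of one such criterion (back-face culling), purely to illustrate the principle:

```c
#include <stdbool.h>

typedef struct { float x, y; } Vec2;  /* vertex position after projection */

/* With counter-clockwise front faces, a triangle whose projected
 * vertices wind clockwise faces away from the viewer and can be
 * discarded before any shading work is spent on it. */
bool is_back_facing(Vec2 a, Vec2 b, Vec2 c)
{
    /* Twice the signed area of the triangle: positive for CCW winding. */
    float signed_area = (b.x - a.x) * (c.y - a.y)
                      - (b.y - a.y) * (c.x - a.x);
    return signed_area <= 0.0f;
}
```

The claimed win is doing this kind of rejection earlier and at a much higher rate in the geometry front end, instead of letting dead triangles reach the rasterizer.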
 
Here is a summary of Koyoot's argument:

When Vega FE has poor gaming performance, "Vega FE is not a gaming product!" -> Then when Vega RX mirrors Vega FE's performance, "No one should care about gaming!" -> Then when we point out that graphics workloads are important, "The performance will be better in the future!"

Sure dude. I can't wait to see how this argument keeps evolving.

No one is going to look at the mediocre results of the last few AMD GPUs and think that it's going to magically get better in the future. You can cite all the architectural nuances you want, but when RX Vega's reviews hit and directly compare it to Nvidia's offerings, that is what consumers and professionals will make their decisions on.
 
Soooo...

You call the architecture a failure because the software is not ready for it? You call the hardware not useful because there has been no software (so far) for this GPU?

What will happen when RX Vega with properly optimized software (gaming) turns out to be, for example, 20% faster than a Titan Xp? What will be a failure then?

Reserve your judgment of the hardware until the software has matured.

I'm sure they will adapt to the situation.

IMO, as a GPU manufacturer, yes, that means a failure.

The logic is simple: buyers are buying the function of the card, not just the hardware.

If Ferrari makes an ultra-nice high-performance car (hardware), but the driving system (software) is unable to release its power, or even worse, unable to control the car properly, that's a failure, no matter how good the hardware is. On the positive side, that should be fixable by a software upgrade; however, until the software is there, that piece of hardware is functionless.

The same goes for smartphones, computers, GPUs, and in fact most modern devices. If the software is not ready at launch, the product (the whole product, not just the hardware) is a failure (at launch, at least). From a technology point of view, the hardware itself can be a piece of art, but for most consumers that means nothing, because it's useless. Creating a GPU with a very advanced architecture that no one has software for (when even their own driver cannot utilize the hardware) doesn't mean they are good, just unrealistic.

We can build the fastest aircraft in the world, and it will crash without a proper fly-by-wire system. Is that advanced hardware or a piece of rubbish? Both!

This way the driver hands a lot of control over to the developers. They have been asking for control over the hardware, and this is what they have got.

This sounds very good: let the users take control. But again, using the supercar as an example: take out all the advanced software assists and market the car as "completely controlled manually by the driver", and that doesn't sound impressive, does it?

AMD can allow programmers to take over the low-level stuff; however, that should be optional, not compulsory. A GPU that cannot be utilized by current software, where everything must be rewritten to release the GPU's power, doesn't sound like a good option for most developers. And what about the next-gen GPU? Rewrite everything again? AMD is the manufacturer providing this hardware; they should also provide an easy way to utilize it (like CUDA).

Allowing programmers to control the low-level stuff is just an excuse not to provide any proper software. Nvidia is selling a fully automatic guided missile, while AMD is marketing that their bomb can be more powerful (if you can hit the target), but you have to drop it manually and do all the calculation and planning about how and where to drop it. No matter how powerful the bomb is, that doesn't sound impressive to me.
 
Here is a summary of Koyoot's argument:

When Vega FE has poor gaming performance, "Vega FE is not a gaming product!" -> Then when Vega RX mirrors Vega FE's performance, "No one should care about gaming!" -> Then when we point out that graphics workloads are important, "The performance will be better in the future!"

Sure dude. I can't wait to see how this argument keeps evolving.

No one is going to look at the mediocre results of the last few AMD GPUs and think that it's going to magically get better in the future. You can cite all the architectural nuances you want, but when RX Vega's reviews hit and directly compare it to Nvidia's offerings, that is what consumers and professionals will make their decisions on.
It's funny that you can spin my words this way, when I was saying that the software was immature for Vega FE.

I am not diminishing anything. Why do you refuse to see the other side as well, apart from only yours? I am just trying to educate you: it is not just black or white.

So you admit that you, as a professional, base your choice of GPU on gaming performance?

Thank you. I have no more questions, then.

This sounds very good: let the users take control. But again, using the supercar as an example: take out all the advanced software assists and market the car as "completely controlled manually by the driver", and that doesn't sound impressive, does it?

AMD can allow programmers to take over the low-level stuff; however, that should be optional, not compulsory. A GPU that cannot be utilized by current software, where everything must be rewritten to release the GPU's power, doesn't sound like a good option for most developers. And what about the next-gen GPU? Rewrite everything again? AMD is the manufacturer providing this hardware; they should also provide an easy way to utilize it (like CUDA).

Allowing programmers to control the low-level stuff is just an excuse not to provide any proper software. Nvidia is selling a fully automatic guided missile, while AMD is marketing that their bomb can be more powerful (if you can hit the target), but you have to drop it manually and do all the calculation and planning about how and where to drop it. No matter how powerful the bomb is, that doesn't sound impressive to me.
This is a bad analogy. Nvidia has to play by the same Vulkan rules as AMD does. It's the developers who control hardware performance in Vulkan and DX12, not the IHVs.

It appears that you do not understand what I am writing. Drivers can only expose features of the hardware to the application; it is the developer's choice whether or not to use them in their game.

Primitive Shaders is the feature that allows geometry throughput to double and GPU performance to increase. It is only available as a feature in Vulkan and DX12, so it will not appear in DX11 games. Regardless, this feature HAS TO BE implemented by the developer for the hardware to utilize it. The only thing AMD can do at the driver level is make it visible to the developer, and eventually to the application; all the work is done by developers in DX12 and Vulkan. Nvidia was able to gain some performance in Vulkan and DX12 titles because later driver releases enabled features that were present in the GPUs and implemented in the game engines, but had not been visible because they were not enabled in the drivers.

In DX11 and OpenGL, all the performance was in the hands of the IHVs: Intel, Nvidia, AMD.
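You can see that division of labor in Vulkan itself: the driver only reports what the hardware can do, and nothing is switched on unless the application opts in at device creation. A rough sketch, with the function name invented and error handling omitted:

```c
#include <vulkan/vulkan.h>

/* The driver *reports* capabilities; the application must explicitly
 * enable the ones it intends to use when creating the logical device. */
VkDevice create_device_with_optional_features(VkPhysicalDevice gpu,
                                              uint32_t queue_family)
{
    VkPhysicalDeviceFeatures available;
    vkGetPhysicalDeviceFeatures(gpu, &available);   /* what the HW offers */

    VkPhysicalDeviceFeatures enabled = {0};         /* what we opt into */
    if (available.geometryShader)
        enabled.geometryShader = VK_TRUE;
    if (available.multiDrawIndirect)
        enabled.multiDrawIndirect = VK_TRUE;

    float priority = 1.0f;
    VkDeviceQueueCreateInfo queue_info = {
        .sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
        .queueFamilyIndex = queue_family,
        .queueCount = 1,
        .pQueuePriorities = &priority,
    };
    VkDeviceCreateInfo dev_info = {
        .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
        .queueCreateInfoCount = 1,
        .pQueueCreateInfos = &queue_info,
        .pEnabledFeatures = &enabled,  /* anything not requested stays off */
    };
    VkDevice dev;
    vkCreateDevice(gpu, &dev_info, NULL, &dev);
    return dev;
}
```

A feature the driver exposes but the application never enables simply does nothing, which is the point being made above.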
 
I base my professional choice on my specific needs. For me this is real-time physics simulations in game engines and machine learning. Nvidia is the better choice here.
What is the reason why Nvidia is better for your needs?
 
I am not diminishing anything. Why do you refuse to see the other side as well, apart from only yours?

So you admit that you, as a professional, base your choice of GPU on gaming performance?

Thank you. I have no more questions, then.

I keep following this post. I am not going to start (or join) a war or anything, but just want to express my point of view.

I am sure quite a lot of Mac Pro users are also professionals; maybe not professionals in your area, but still professionals in the computer/software/multimedia industry...

If they care about gaming performance, then gaming performance is important to them. And I don't know how you can represent them and say that gaming performance is not important for any professional. Since when can you represent ALL professionals?

If they say graphics performance is important to them, then it's important to them. It's that simple.

How about a game developer who needs a GPU to test their game? Are you going to say a game developer is not a professional at all? How about an architect using VR to design a building? Is an architect not a professional? How about if I use my computer as a flight sim? Am I, a professional pilot, not a professional?

"Professional" has its own definition, and it is not defined by you. I don't know why a professional must only care about compute performance. You are the person who only cares about potential compute performance, not all professionals.

I must say that I have learned a lot from your posts, especially about GPU architecture. However, I can't agree that AMD GPUs are comparable to Nvidia's at this moment. Yes, maybe Vega is comparable to the Titan Xp in some specific areas (compute). However, at most it wins by a little bit there, while in other areas (gaming, graphics performance, software development difficulty, power efficiency, etc.) it loses big, which makes it clearly not a product on the same level.

I know your original post was not addressed to me. But I am not refusing to see the other side. In fact, what I see is that most people are comparing things across all areas, and you are the person who only sees things from your side. Why does only compute matter? Why can't gaming benchmarks be important? Why is graphics performance meaningless for all professionals? Why should a few compute wins override everything? I just don't understand.
 
Then let me express what I am seeing.

"Vega is on par or faster in gaming but uses quite a lot more power than GTX 1080 - rubbish GPU"
"Vega is faster or on par with Titan Xp in compute - I don't care, it is not a gaming card, its a failure".

Those two are just analogies.

Has anybody considered two things: that drivers, and BIOS of Vega FE were not ready? Has anyone considered that Vega features are not implemented, or visible in the applications? Has anyone considered, that those features are heavily important for this architecture to start flying? And yes, I am talking about gaming features. Vega brought ONLY gaming features, to the table. The only one appears to be working is Draw Stream Binning Rasterizer(in Vega RX, Vega FE had it disabled).

Why do you believe that I am saying that gaming is not important.

What I am constantly saying: wait with your judgments about this particular hardware to the moment when software will mature.

Vega in DX11 will unfortunately behave just like GTX 1080. There is nothing AMD can do here. I have to wait and see the effect of Primitive Shaders implementation in game engines, to draw conclusion with what GPU it will compete, in future games in DX12 and Vulkan, but I think it will easily tie with Titan Xp.
 
Since we seem to be obsessed with compute performance, don't forget that this is coming very soon:

http://www.anandtech.com/show/11559...ces-pcie-tesla-v100-available-later-this-year

30 TFLOPs of FP16, 15 TFLOPs of FP32 raw computing power. Oh, and 120 TFLOPs for deep learning, just as icing on the cake. Given how late Vega is, that's the real competition at the high end, and Vega has come up short.

I was going to respond to the "graphics workloads in professional applications are actually compute" comment, but it's so laughably false that I won't bother. If you are rendering triangles, then it's a graphics pipeline. If you are running a compute kernel, then it's compute. This is not rocket science.
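And in API terms those really are two different entry points on the same GPU. A rough OpenGL 4.3 sketch (programs and the VAO are assumed to be built elsewhere; the function name is made up):

```c
#include <GL/glew.h>  /* any GL 4.3+ function loader works here */

/* Same GPU, two different front doors. */
void draw_then_crunch(GLuint raster_program, GLuint compute_program,
                      GLuint vao, GLsizei vertex_count, GLuint groups_x)
{
    /* Graphics: vertices go through the rasterization pipeline. */
    glUseProgram(raster_program);
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);

    /* Compute: no triangles, no rasterizer, just threads over buffers. */
    glUseProgram(compute_program);
    glDispatchCompute(groups_x, 1, 1);
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
}
```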
 
I was going to respond to the "graphics workloads in professional applications are actually compute" comment, but it's so laughably false that I won't bother. If you are rendering triangles, then it's a graphics pipeline. If you are running a compute kernel, then it's compute. This is not rocket science.
If it is a graphics workload, and in this workload Vega is faster than the Titan Xp, why does that not translate into gaming performance?
 
Then let me express what I am seeing.

"Vega is on par or faster in gaming but uses quite a lot more power - rubbish"
"Vega is faster or on par with Titan Xp in compute - I don't care, it is not a gaming card, its a failure".

This is not what I perceive people are saying. I think the sentiment is that AMD has been absent from the high-end market for over a year, and Vega is supposed to be a return to that segment. Now that Vega is here, the product we are getting, while reasonable in compute performance, is relatively poor in graphics performance given (1) the size of the GPU die and (2) the power consumption of the chip. AMD delivered us a GPU the size of a Titan Xp with even higher power consumption, but with the graphics performance of the much smaller and much more efficient GTX 1080. You can claim that performance will magically get better over time as developers take advantage of a vendor's specific GPU features, but that applies to all GPUs, not just AMD's.

It does better in compute performance, but I haven't seen any results that made me think it was markedly better than any of Nvidia's offerings, only that, depending on the specific task, one or the other could be better.

Vega in DX11 will unfortunately behave just like a GTX 1080; there is nothing AMD can do about that. I have to wait and see the effect of Primitive Shaders implementations in game engines before drawing conclusions about which GPU it will compete with in future DX12 and Vulkan games, but I think it will easily tie with the Titan Xp.

You can claim it's all software, but AMD is not doing much better in DX12/Vulkan performance. Even AMD's own marketing material shows the GTX 1080 averaging a higher frame rate in their hand-selected DX12 and Vulkan games.

As Mac users we want our machines to have the best GPUs possible, and lately it seems that AMD just can't keep up with the onslaught of Nvidia's constant releases. Nvidia has the luxury of releasing high-end compute-focused chips like the P100/V100 as well as the consumer-focused GP102, GP104, GP106, etc. AMD has to try to cover all their bases with only one or two GPUs a year.

We Mac users are stuck with AMD, and we are starting to feel like suckers as our PC counterparts get to enjoy the best GPUs on the market as soon as they are released. Apple only has AMD's RX 480/580, which is solidly midrange, and won't get a high-end chip until the iMac Pro ships with Vega this winter. Meanwhile, PCs have had the 9 TFLOP GTX 1080 and the 11 TFLOP Titan X for a year now. That makes Vega's lackluster graphics performance that much harder to swallow, since it's going to be a while before AMD has anything better, and Nvidia's consumer Volta chips are on the horizon.
 