Please explain how a driver reports to an application that it has fast antialiased lines in OpenGL. If this were an OpenGL extension, sure, the driver could just not report it. However, antialiased lines have been a baseline feature since OpenGL 1.0, which came out 25 years ago.
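(For context, here is a minimal sketch of everything classic OpenGL actually exposes on this point: an app can enable line smoothing and state a preference, but there is no query that says whether the driver's path is fast hardware or a slow fallback - the renderer string is about as close as it gets.)

[CODE]
/* Minimal sketch, plain C against classic OpenGL (macOS header shown).
 * Nothing here reports *speed*: GL_LINE_SMOOTH is core since 1.0, the hint
 * is only a preference, and the width range says nothing about performance. */
#include <stdio.h>
#include <OpenGL/gl.h>   /* <GL/gl.h> on other platforms */

void enable_smooth_lines(void)                     /* assumes a GL context is current */
{
    GLfloat range[2] = {0.0f, 0.0f};

    glEnable(GL_LINE_SMOOTH);                       /* baseline since OpenGL 1.0      */
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);         /* a hint, not a guarantee        */
    glGetFloatv(GL_SMOOTH_LINE_WIDTH_RANGE, range); /* supported widths only;
                                                       named GL_LINE_WIDTH_RANGE in
                                                       GL 1.0/1.1 headers             */

    /* The closest thing to a "capability report" is the renderer string. */
    printf("Renderer: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("Smooth line width range: %.1f - %.1f\n", range[0], range[1]);
}
[/CODE]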
Why do you talk about this, when I was talking about Primitive Shaders, Draw Stream Binning Rasterization, etc., and used antialiased lines as an analogy for why Vega may perform worse in the current stack of software than it should?

No way, man. I have a simple dual-core Pentium that can decode 10-bit 4K HEVC better on Windows than an i7 can on High Sierra. Not by just a little. I mean a massive difference, because macOS fails entirely.
This is actually a good point, but I was talking about different things ;).
 
No way, man. I have a simple dual-core Pentium Skylake that can decode 10-bit 4K HEVC better on Windows than an i7 Skylake can on High Sierra. Not by just a little. I mean a massive difference, because macOS fails entirely.
Mac drivers can perform better but have fewer features.
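On the HEVC point: if you want to check whether a given Mac will even offer hardware HEVC decode (as opposed to quietly falling back to a software path), VideoToolbox on High Sierra has a simple query. A minimal sketch - note it only says whether a hardware decoder exists for the codec at all, not whether 10-bit 4K content will actually play back smoothly:

[CODE]
/* Minimal sketch for macOS 10.13+. Build roughly with:
 *   clang check_hevc.c -framework VideoToolbox -framework CoreMedia -framework CoreFoundation
 * This only reports codec-level hardware decode support; it does not prove
 * that 10-bit 4K HEVC will be handled by the hardware path in practice. */
#include <stdio.h>
#include <CoreMedia/CoreMedia.h>
#include <VideoToolbox/VideoToolbox.h>

int main(void)
{
    if (VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC))
        printf("Hardware HEVC decode is offered on this machine.\n");
    else
        printf("No hardware HEVC decode - expect a (slow) software fallback.\n");
    return 0;
}
[/CODE]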
 
Edit: Let me put it another way. NVIDIA released their Pascal GPUs, and existing games got significantly faster. That's because their architecture didn't require every app developer to go and rewrite their engine to make it work well with the new GPU. If AMD is banking on every app developer rewriting their app to make it run on Vega, they've already lost.

This is exactly what I am thinking. No matter how good the hardware is, if it can't perform as expected at launch (including because the architecture is so advanced that it requires a complete rewrite of current software), then it has already lost. Who will rewrite everything for a GPU that has literally zero market share at this moment? The cost is high, and the potential return is unknown. Furthermore, developers know that if they write something optimised for an Nvidia card, the next card will automatically work better, because Nvidia will launch the card with a good-performing driver to handle the transition.
 
This is exactly what I am thinking. No matter how good the hardware is, if it can't perform as expected at launch (including because the architecture is so advanced that it requires a complete rewrite of current software), then it has already lost. Who will rewrite everything for a GPU that has literally zero market share at this moment? The cost is high, and the potential return is unknown. Furthermore, developers know that if they write something optimised for an Nvidia card, the next card will automatically work better, because Nvidia will launch the card with a good-performing driver to handle the transition.
Rewriting means: adding specific code for Vega. That is all.
 
What are GTX cards doing in a "pro" computer? I thought those cards belonged to the consumer category. Why not a Quadro card? Price?
We don't need ECC, so why not get $700 cards that are faster than $5000 cards?

Is this computer used as a workstation or as a server?
Server for machine learning - no monitors are attached.
 
Why do you talk about this, when I was talking about Primitive Shaders, Draw Stream Binning Rasterization, etc., and used antialiased lines as an analogy for why Vega may perform worse in the current stack of software than it should?

Because you're insisting on using SPECviewperf as the only measurement of graphics performance. Primitive shaders and tiled rendering don't matter for this test. Antialiased lines do matter, and they're either going to be fast or not.

Let's imagine that a game developer decides that they want to support primitive shaders. So they add support to their game, which might ship in a year or two. Why on earth would anyone buy an RX Vega if it's going to take years before games arrive that leverage the special new features, which (according to you) are the only way to make RX Vega run fast?
 
Because you're insisting on using SPECviewperf as the only measurement of graphics performance. Primitive shaders and tiled rendering don't matter for this test. Antialiased lines do matter, and they're either going to be fast or not.

Let's imagine that a game developer decides that they want to support primitive shaders. So they add support to their game, which might ship in a year or two. Why on earth would anyone buy an RX Vega if it's going to take years before games arrive that leverage the special new features, which (according to you) are the only way to make RX Vega run fast?
I was talking about Vulkan and DX12 performance, and used your example as an analogy for the need to redesign apps to use hardware features, and the effect that has on GPU performance through the drivers.


On a side note about Vega: I have one, but HUGE, concern. How will Vega and its features cope with Metal 2?
 
This is exactly what I am thinking. No matter how good the hardware is, if it can't perform as expected at launch (including because the architecture is so advanced that it requires a complete rewrite of current software), then it has already lost. Who will rewrite everything for a GPU that has literally zero market share at this moment? The cost is high, and the potential return is unknown. Furthermore, developers know that if they write something optimised for an Nvidia card, the next card will automatically work better, because Nvidia will launch the card with a good-performing driver to handle the transition.
This reminds me of what happened when the 2013 Mac Pro started shipping: horrible performance in pro apps, which were the primary target demographic of the Mac Pro. That caused people to shy away from it.
 
Apple should just make an Nvidia model of the iMac Pro and see which one sells better :cool:
 
Also, I never got my question answered: is the Vega FE's TDP too much for the cMP?

I'm not sure what the limit of the cMP is, but Vega Nano should be a 150 W to 200 W card, and Vega 56 should be around 210 W. Here is a link that lists the cards we know about.

Apple develops the Radeon drivers for OSX, so it could perform better. But if it lags behind in API level, you can blame them.

This is not true. Apple produces the frameworks (i.e. OpenGL, Metal) but the hardware vendors (AMD, Intel, NVIDIA) write the driver back-ends (i.e. the part that actually talks with their hardware).

Right, and historically graphics performance on macOS has lagged Windows. Now that Metal/Metal 2 is here, maybe there will be more parity, but I haven't seen any good comparisons made.

Rewriting means: adding specific code for Vega. That is all.

If every GPU AMD makes requires very specific tuning to get optimal performance out of it, I perceive that as a weakness, not a strength. Remember back to the PlayStation 3, with its unique Cell chip? Games that were specifically developed for that architecture could have slightly better graphics than their competitors, but most developers, especially those who weren't given lots of money by Sony to make exclusives, simply defaulted to worse graphics and shipped the game. Not every developer is going to have the time or knowledge to tune their applications and games to run slightly better, especially given that Vega-based Macs will be a fraction of a fraction of their total market for the foreseeable future.

AMD's advantage is that they are in the PS4 and Xbox One, but the vast majority of these are using GCN 1 feature sets. Only the newest consoles use a limited set of Vega's features and who knows if tuning for those platforms will result in better graphics performance in macOS.
 
Right, and historically graphics performance on macOS has lagged Windows.
How is OpenGL performance on both when you use the same hardware, but Apple driver on OSX and AMD driver on Windows?
 
I was talking about Vulkan and DX12 performance, and used your example as an analogy for the need to redesign apps to use hardware features, and the effect that has on GPU performance through the drivers.

Okay great, let's stop using SPECviewperf as a comparison of Vega vs Pascal graphics performance then, shall we?
 
If every GPU AMD makes requires very specific tuning to get optimal performance out of it, I perceive that as a weakness, not a strength. Remember back to the PlayStation 3, with its unique Cell chip? Games that were specifically developed for that architecture could have slightly better graphics than their competitors, but most developers, especially those who weren't given lots of money by Sony to make exclusives, simply defaulted to worse graphics and shipped the game. Not every developer is going to have the time or knowledge to tune their applications and games to run slightly better, especially given that Vega-based Macs will be a fraction of a fraction of their total market for the foreseeable future.

AMD's advantage is that they are in the PS4 and Xbox One, but the vast majority of these are using GCN 1 feature sets. Only the newest consoles use a limited set of Vega's features and who knows if tuning for those platforms will result in better graphics performance in macOS.
GCN4 did not require it. GCN5 requires adding specific lines of code, because it is that different in terms of hardware features. If those lines of code are not added, GPUs with this architecture will behave just like GCN3 and GCN4 did.
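To make the "adding specific code" idea concrete, here is a deliberately crude sketch (plain C over OpenGL). A real engine would key off proper device/feature queries in Metal, Vulkan or vendor extensions rather than string-matching the renderer name, and draw_frame_vega_tuned() is purely hypothetical:

[CODE]
/* Crude illustration of an architecture-specific code path: use a Vega-tuned
 * renderer only when the driver identifies the GPU as Vega; otherwise keep the
 * path that already works on GCN3/GCN4. The string match is illustrative only. */
#include <string.h>
#include <OpenGL/gl.h>   /* <GL/gl.h> on other platforms */

static void draw_frame_generic(void)    { /* existing GCN3/GCN4-era path     */ }
static void draw_frame_vega_tuned(void) { /* hypothetical Vega-specific path */ }

void draw_frame(void)                    /* assumes a GL context is current */
{
    const char *renderer = (const char *)glGetString(GL_RENDERER);

    if (renderer && strstr(renderer, "Vega"))
        draw_frame_vega_tuned();   /* only happens if someone wrote this code */
    else
        draw_frame_generic();      /* otherwise Vega behaves like older GCN   */
}
[/CODE]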
 
It's hard to make that comparison, since the vast majority of games on Windows use DirectX even if they use other APIs on other platforms. Barefeats did test a few non-Metal games. The results are pretty bad for macOS.
There are some multiplatform games which are only programmed in OpenGL.

You cannot say the Apple OpenGL runtime is not better than AMD's performance-wise if you just compare the Mac against Direct3D.
 
There are some multiplatform games which are only programmed in OpenGL.

You cannot say the Apple OpenGL runtime is not better than AMD's performance-wise if you just compare the Mac against Direct3D.

That doesn't make sense, and I didn't make that claim. Apple implements OpenGL in macOS and AMD writes the drivers, i.e. what OpenGL uses to talk to the hardware. The performance of OpenGL in macOS is most likely worse than in Linux and Windows, since macOS is several versions behind, and now the OpenGL consortium has moved on to Vulkan while Apple is only supporting Metal.
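A quick way to see exactly which GL version and driver stack a machine exposes (on either OS) is just to print the strings the current context reports - a minimal sketch, assuming a context has already been created:

[CODE]
/* Prints the GL vendor/renderer/version strings of the current context.
 * On macOS the reported version currently tops out around 4.1, while the
 * vendor drivers on Windows/Linux typically report 4.5/4.6. */
#include <stdio.h>
#include <OpenGL/gl.h>   /* <GL/gl.h> on other platforms */

void print_gl_stack_info(void)           /* assumes a GL context is current */
{
    printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
}
[/CODE]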
 
That doesn't make sense, and I didn't make that claim. Apple implements OpenGL in macOS and AMD writes the drivers, i.e. what OpenGL uses to talk to the hardware. The performance of OpenGL in macOS is most likely worse than in Linux and Windows, since macOS is several versions behind, and now the OpenGL consortium has moved on to Vulkan while Apple is only supporting Metal.
Implementing OpenGL is very complex. You cannot assume that the part AMD writes is the reason the card would not run faster on the Mac under it.

That the Mac is behind in API level does not mean that what is supported does not run faster.

Khronos has not moved on. Vulkan is a complementary API.

How is Apple only supporting Metal? That would be suicidal.
 
That the Mac is behind in API level does not mean that what is supported does not run faster.

For a lot of games it means they haven't come to the Mac at all. There has been a no man's land between when Apple stopped updating OpenGL and when it started supporting Metal, during which macOS lost some game support. Frontier Developments stopped supporting Elite Dangerous when macOS couldn't do compute shaders, Overwatch is the first Blizzard game not on the Mac, etc.

How is Apple only supporting Metal? That would be suicidal.

Apple wants a graphics API they have control over, unlike Vulkan. Hopefully Metal 2 is able to bring more games to the Mac with better performance, especially since they can leverage all the game engines that have iOS support. In fact, right now there are a surprising number of Metal games, many more than for Vulkan and probably competitive with DX12.

My theory is that Metal is partly what's keeping Apple locked to AMD, in that their hardware runs it better. Many newer macOS games state they only support AMD GPUs.
 
And this is where the Mac becomes a niche, like with Quartz and Objective-C.

Any more niche than DX12?

Maybe Vulkan is the one that's about to become a niche... year of Linux and all.
 
I said the Mac is a niche, not the iPhone.

So, the Mac will get iOS games. Windows will get Xbox games.

More games are using Vulkan than DX12.

As long as game engines support Metal, it's easy enough to bring them to the Mac. Currently some of the biggest game engines, including Unreal and Unity, support Metal on the Mac. Judging by the list of DX12 games, there are more games out for Metal on macOS than for DX12 on Windows.
 
As long as game engines support Metal, it's easy enough to bring them to the Mac. Currently some of the biggest game engines, including Unreal and Unity, support Metal on the Mac. Judging by the list of DX12 games, there are more games out for Metal on macOS than for DX12 on Windows.
LOL.

On that page, look at the "See also" section: it's just a link to the Video games portal.

Also, the dates are rather suspicious - I doubt it's a full list of DX9/10/11 titles that have been updated for DX12.

And the Vulkan list is very short.
 