It's frustrating that Apple appears behind the curve, especially with the 580x, which is certainly approaching obsolescence.
The 580X is nowhere near de-supported, vintage, or "obsolete" status. Not being the "newest shiny" on the store shelves is not really obsolescence. Stores mostly selling out their inventory stock, with not much new being replenished, isn't really obsolescence either.
But Apple's cards are very far from typical consumer "reference" cards, and they likely had to work with what was available 18 months ago or more. I'm not apologizing for them, but I at least understand.
It's not just that they "had to work with what was available 18 months ago." They also shouldn't have been trying to build, in 2018, around something that wasn't intended to be ready in 2018 (relative to when the project started). In other words, they should not have been "betting the whole farm" on misty 2019-2020 tech for the Mac Pro. The product was grossly late (pragmatically it still is, because it is still not shipping). More latch-onto-and-slide, unproven tech would likely just push the system out even further (possibly into 2020).
AMD taped out Navi in October 2018. The initial rumor was that it was good. Within a matter of weeks that turned into "oops, need to go back to the pre-tape-out stage," and the whole Navi product rollout slid about six months. The Mac Pro didn't need any component with those characteristics. The Intel CPU rollout was likely to be funky (and it turned out to be).
As far as "reference card" goes, that shouldn't be an issue if Apple assigns a reasonable number of talented folks. A half-width MPX module for the 580X and the 5700 wouldn't really require some "moon shot" level of work: both use GDDR, and most of the layout will be pretty close to the same.
Can Apple just "paint" Apple's name on the card, slap some minor firmware tweaks on it, and ship? No. But the bigger problem is more likely the drivers than the hardware, so reference versus non-reference probably doesn't make much of a difference on that timeline at all. And yes, macOS drivers are different (Apple doesn't try to closely track Microsoft or Linux).
And even if it did have a 5700, I'd likely still view it as merely an average base card for people who don't need high-performance GPU compute support.
Driving up the base component costs would only set the base system price higher. For mostly digital-audio work, the 5700 is overkill.
One of the factors is the need/want for a half-width MPX module so the system still has the secondary slot. Pushing higher would get onto a slippery slope where it flips to a full-width MPX module, and that slot would be lost.
At the higher end (Vega II), Apple is obviously focused on compute and not general purpose stuff (gaming). In that role, Vega II may not be bleeding-edge, but it's reasonably strong. Now if only more of the popular machine learning applications (TensorFlow, etc.) used AMD instead of Nvidia...
Google runs billions of inferences per day off TensorFlow with its own processors. TensorFlow was largely invented to work with Google's Tensor Processing Unit in the first place.
"... The chip has been specifically designed for Google's TensorFlow framework, a symbolic math library which is used for machine learning applications such as neural networks ..."
https://en.wikipedia.org/wiki/Tensor_processing_unit
The notion that the world will end if TensorFlow isn't hard-coupled to an Nvidia GPU is the new cryptocurrency-like bubble. Nvidia likes it, because it sells more cards for them. But this is not because Nvidia is "bleeding edge"; it is far more because the current state of inertia includes more models that have been sprinkled with Nvidia hooks, forking them out of the open TensorFlow application stream. Nvidia has done "embrace and extend" work that is useful, but they are trying to fork folks off the bleeding edge.
There is other stuff that falls into the "bleeding edge" camp. A processor about the size of a whole wafer? That would be bleeding edge.
The inherent problem with Nvidia (or AMD) GPUs for AI work is all the circuitry dedicated to not doing AI work. As long as the competing systems simply allocate that area to doing more AI computations (and not to drawing triangles, mapping textures onto surfaces, or driving raster operations), they'll likely win out over the long term in the "bleeding edge" space.
There is no Zen 2.5 - AMD has finished designing Zen 3 - they have started work on Zen 4.
Ryzen/TR 1000 and 2000 series were on Zen "1". It is only the 3000 series that moved to Zen 2.
Zen 3 may not be the 4000 series. TSMC has a "shrink your same design library" option.
AMD supposedly finished designing Navi back in October 2018, right up until it turned out it wasn't actually finished. In the EPYC space, "finished design" and "shipping" are more than substantively different.
The Navi stack is starting to grow - we have seen some Navi 14 benchmarks, so we should see some RX5600 and RX5500 coming for the holidays. We won't see big Navi until next year.
And even when "big Navi" arrives, it may not be a replacement for Vega 20. If it is "big" because AMD slapped on ray-tracing subsystem hardware, then it won't be a replacement.
... The ATS podcast had a good segment on that where they had emails sent to them from Apple insiders on the Nvidia driver quality issues.
ATS podcast?