....The Vega II Duo looks great on paper, but it sounds like the tech will be superseded by AMD's Navi 23 due in the middle of 2020.
There is little so far to indicate that Navi 23 is really the successor to the Vega 20 in the Vega II. Arcturus is more likely the successor to Vega 20.
https://www.tweaktown.com/news/68074/amds-next-gen-arcturus-gpu-teased-here-1h-2020/index.html
It is basically a Vega 20 moved down to a 7nm+ fab (i.e., a shrink of the 7nm design ***). Apple's driver rollout will probably trail the Linux one for the Instinct rollout, so figure around October 2020, plus or minus 2-3 months, for an Apple version.
Navi 23 isn't aimed at the same market, according to most of the rumor info I've seen. It sacrifices some general compute area to chase "real time" ray tracing compute. That is why it is being labeled an "Nvidia Killer" (likely more of an Nvidia feature-list-matching checkbox than a killer).
A proprietary "real time ray trace" feature that Apple doesn't have in their own GPU implementation probably isn't going to be a top priority for Metal. I wouldn't "bet the farm" on Apple doing that in 2020. Apple will need to do something eventually: Intel is going to have one too, and Apple may put one in as well (in the chase toward an AR headset and AR-capable phones at lower power draws). Writing this rendering driver work would grow their skill set so they can make a better solution that spans hardware implementations.
But the more solid foundation for Apple would be an Arcturus update that could simply grunt computationally through current Metal workloads (those composed from 2018 to early 2020) faster.
Look at Nvidia RTX rolling out into other apps: it has taken Nvidia substantial time to expand app support. A Metal rollout would probably take longer, as Nvidia hooked the software and hardware R&D together from the get-go, while Apple software plus AMD hardware is a split-ownership situation (that is just not likely to move quite as fast).
I'd expect Apple to work with AMD on some drivers related to the Navi 2x family, but I would not expect those drivers to cover the "ray tracing unit" in a 2020 time frame. So it wouldn't be particularly useful as an MPX module. Folks may be able to buy a card, but it isn't really going to be a "Vega 20" killer in terms of computation (viewport raster speed, some gaming, yes; but the bulk of what Apple is aiming the Vega II Duo at... I doubt it. A decent chunk of silicon is going to be assigned to something other than general compute).
The Radeon VII is already end-of-lifed,
It isn't really "end of lifed"; it is just end of production. When you take into account that the Vega II models use exactly the same base die, it is nowhere near literally "end of life". AMD can only make so many of these dies, and if Apple is buying a bucketload of them, there won't be many left after those dies have been allocated to Vega II and MI Instinct 50/60 products. So AMD stopped making what they don't have a supply for (all the more because the VII was probably being sold at about cost, or at relatively very low margins; in part it was a filler because the Vega II hadn't rolled out in volume, plus Navi stumbled on its timeline).
although the RX 5700 XT is very comparable in compute tests (and Vega 64 still holds its own!) Be interesting to see what GPU BTO options there are when Apple finally decides to release them.
Navi's design was somewhat skewed toward gaming (the major 'sponsors' were the game consoles, and AMD wanted to attack the "affordable gaming" range of the add-in card market). If Apple does trade a Navi in for the 580X, I highly doubt it will be a 5700 XT variant. That slot is an entry card, which means cost effectiveness is a parameter. The XT is clocked up out of the thermal sweet spot and carries a higher price; neither of those is a winner for a half-height, more affordable MPX module.
*** P.S. Since the Vega 20 was first implemented on 7nm, it does make some sense to do a shrink of the design (with some clean-ups, a narrow set of feature updates, and maybe a CU count bump). It is way more cost effective than trying to do a new ground-up design in terms of money and personnel resources (neither of which AMD has an oversupply of). Nvidia's high-end computational dies, which take up the maximum reticle size, have issues getting to 7nm, which shrinks the max reticle a bit. If AMD can do a 7nm+ part that competes better on $/performance in the higher-end space, they can probably sell more than just a few of these. (Apple is likely pushing them on pricing also.)