You are definitely right, I am sorry. I rushed my answer. It made sense to me, since they plan to implement PCI-e 4 on their new motherboards and since the Radeon VII and the MI60 have basically the same chip. After all, price differentiation is a good reason to neuter the consumer card.
Apparently AMD fed Anandtech and several other tech outlets bad info on the ROP number (it is 64, not 128 ... which makes sense for a GPU targeted at data centers with no displays hooked up). The issue with PCI-e v4 on new motherboards is that the number of those boards deployed in the PC market right now is about zero. By the end of 2019 it will still be relatively close to zero if you look at all of the active/running systems out there where the card could physically fit (if you narrow it to systems with enough power, it might pop up into a decent single-digit percentage). They could try to spin it as a "future proof" card, but it is doubtful that in the gamer space it would make any significant "game changing" difference. (Most games and apps preload data into VRAM and then run. In most active states of a game, the data rate drops low enough for x8 PCI-e v3 to handle it more than reasonably well.)
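Rough numbers to back that up, using the published per-lane signaling rates (8 GT/s for PCI-e 3.0, 16 GT/s for 4.0) and 128b/130b encoding; exact usable throughput varies with overhead, so treat these as ballpark figures:

```python
def pcie_gbps(gt_per_s: float, lanes: int) -> float:
    """Approximate one-direction bandwidth in GB/s after 128b/130b encoding."""
    return gt_per_s * lanes * (128 / 130) / 8  # gigatransfers -> GB/s

v3_x8 = pcie_gbps(8, 8)      # ~7.9 GB/s: the "x8 PCI-e v3" case above
v4_x16 = pcie_gbps(16, 16)   # ~31.5 GB/s: the headline PCI-e v4 x16 number
print(f"{v3_x8:.1f} GB/s vs {v4_x16:.1f} GB/s")
```

Even the x8 v3 figure dwarfs the sustained transfer rate most games need once assets are resident in VRAM.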
The other issue is that since they have it working on "Vega 20", there is a high probability it will be working on the Navi solution coming up. The Vega VII doesn't have to be their sole GPU answer for 2019. When there are far more affordable PCI-e v4 cards to sell, that may help move new boards where mainstream folks are chasing the latest tech porn.
AMD probably has the Epyc "8000" series paired up with the MI50/MI60 in mind ... not the Vega VII. More likely there are workloads there where PCI-e v4 makes a difference (more and/or bigger datasets being moved back and forth).
I wouldn't be sure about that. Apple has said and denied many things and changed its mind within a few years.
While not impossible, it is unlikely. Apple didn't so much say they would not do desktops as say they are highly focused on what they are doing. Making the best SoC possible for the iPhone is job number 1 (and 2 and 3). Screw that up and the company would tank significantly. So the question is how likely Apple is to take their eye off the ball to do a desktop. That is unlikely. Even more so now that the iPhone is beginning to stall and the SoC is one of the only major differentiators they have left.
The iPad is taking iPhone "left overs". Again, that points to where the 'eye on the ball' is. The iPad Pro does have a derivative chip, but it isn't iterating every year the way the main iPhone chip does. Again, that points to where the "eye on the ball" is.
The T series is a "left over" (a pruned-down iPhone chip, with some customized power management and some additions to run fans). Even then, there is one and only one version they work on (on track to go into every Mac to reach acceptable volume).
The S series is tuned down from iPhone levels. It is at 64 bits now, so it is on track to follow tweaks to the "small core" in the iPhone (and a bit vice versa).
The W series ... is that really even Apple's baseline design? Certainly it has been customized to Apple's specs, but the ARM inside of a Wi-Fi / Bluetooth controller ... there are a couple of designs they could license.
For sure the next 2-3 years will see many CISC chips being installed alongside ARM RISC, but I am pretty sure Apple will move to proprietary APUs on their notebook line, at least. ARM chips are less power hungry,
x86 isn't really CISC. Lots of internet forums have spun it that way, but the folks who invented the term say it isn't really what they were talking about.
Many ARM chips are less power hungry because they are heavily tuned that way. Being incrementally smaller implementations allows many of them to jump onto new fab processes quicker and more economically, but as new fab processes arrive less frequently and cost more money, that gap will narrow over time, not get bigger or stay constant.
offer higher IPC gains year-to-year,
There is about zero evidence that comes from the ARM instruction set or some magical ARM-specific implementation property. From the article you link to later:
" ... “The ARM core significantly improves processor performance by optimizing branch prediction algorithms, increasing the number of OP units, and improving the memory subsystem architecture.” .."
Branch prediction is not a property of the ARM instruction set design. The number of functional units is not a property of the ARM instruction set. The memory subsystem is not a property of the ARM instruction set.
What has been happening is that Apple's ARM designs have been adding the stuff that the Intel (and in many cases the AMD) implementations already had. Those aren't gains driven by the instruction set; it is just putting the functionality into the implementation. As they add this stuff to reach the "last 10-20%" of performance, it is largely the same double-edged sword for them as it is for Intel/AMD.
If it were so "roll out of bed" easy, the iPad Pro processor would still be on a yearly track. It isn't.
There is only so much IPC you can get out of von Neumann-style code. At some point there is a branch, and that will limit your parallelism (unless you want to start opening up security holes).
give Apple full control over the instruction set to use
Where has Apple significantly forked off of the ARM architecture reference? They are implementing their own versions of the ARM architecture, but they have not drifted off into a rogue implementation. There is close to zero benefit for Apple in going rogue. Apple has input into what new stuff goes in, but 'forked' ... got proof? [A different cache coherency fabric is far more an implementation difference than a significant change in instruction set.]
The vast bulk of the ARM instruction set support and optimization going into the LLVM compiler for ARM is also applicable to Apple's implementation. There isn't a good reason for Apple to throw that away at all. That isn't what they have been doing. What they have been doing is to "save money by" not doing that.
and it just makes sense: Apple has been investing a lot to develop the Ax-Tx-Sx-Wx line, and from a company perspective it would not make sense not to leverage that investment and percolate it onto the Mac line.
Again ... a huge investment in Tx ... not really. Reusing the investment in Ax to redeploy as a non-application-processor solution for Macs ... yes. But is that really huge? Wx ... is that really even Apple's work at the core? Adding some tweaks to Bluetooth pairing, some tweaks to Wi-Fi Direct, and perhaps deploying on a not-the-cheapest, older-process fab probably isn't a huge investment.
Besides, ARM chips are aggressively making their way even into the server market. It's a siege!
https://www.nextplatform.com/2019/01/08/huawei-jumps-into-the-arm-server-chip-fray/
Two points you seem to have glossed over in the article. First, the Ascend chips go from 12nm, 16 TOPS INT8, 8W (2 TOPS/W) to 7nm, 512 TOPS INT8, 350W (~1.46 TOPS/W). Cranking performance up to substantially higher levels isn't always linear. Some of that has to do with the physical and transistor implementation; the effects have very little to do with instruction set variance. (Ascend isn't ARM, so it didn't get magical low-power pixie dust ... errr, probably not.) Bigger I/O (bandwidth), more data to cache, etc. lead to influences outside the instruction set implementation.
Second is the generally lower clock rates they are shooting for: "..Huawei reckons that the 64-core Kunpeng 920 running at 2.6 GHz ...". This thing is "hot rodding" when it gets to 2.6GHz. The 2018 MBA turbos to 3.6GHz (~38% higher). The MBP to 4.8GHz (~85% higher).
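Checking the arithmetic behind those figures (the raw numbers come from the article and Apple's published turbo clocks; the rounding is mine):

```python
# Perf/watt for the two Ascend parts quoted in the article:
tops_per_watt_12nm = 16 / 8    # 2.0 TOPS/W (16 TOPS INT8 at 8 W)
tops_per_watt_7nm = 512 / 350  # ~1.46 TOPS/W (512 TOPS INT8 at 350 W)

# How far the Mac turbo clocks sit above the Kunpeng 920's 2.6 GHz:
mba_gap = (3.6 / 2.6 - 1) * 100  # ~38% higher (2018 MBA turbo)
mbp_gap = (4.8 / 2.6 - 1) * 100  # ~85% higher (MBP turbo)
print(f"{tops_per_watt_7nm:.2f} TOPS/W, +{mba_gap:.0f}%, +{mbp_gap:.0f}%")
```

Note that the 7nm part actually *loses* perf/watt versus the 12nm one, which is the point: scaling to much higher absolute performance isn't linear.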
If you look at the chart, this chip is a big winner on Hadoop (pulling large amounts of data off of spinning disks) and "distributed fusion storage" ... again, probably a very significant amount of data off of spinning disks. Apple's SoC isn't optimized for that at all. If you threw an A-series chip at that workload, it would probably do far worse than the Intel/AMD implementations being compared there. Same ARM instruction set, but a differently tuned implementation.
Throw an iPad Pro A12X Bionic into a Retina MacBook, sure. I'm not sure why you wouldn't also just throw iOS (with the iPad Pro optimizations) at it too. Call it an iBook. If it has an ARM processor, put the largest deployed OS that is available for ARM on it. That would fall into the exact same "reuse in a different product" pattern that Apple has largely been following the last 2-3 years. That would be Apple keeping their "eye on the ball". That is at least as likely as some two-headed (ARM + x86_64 app processor), more expensive (no shared R&D from the x86 market), bloated-binaries (a return of fat binaries on lower-capacity SSD drives) strategy.
P.S. The other thing about that Kunpeng 920 piece is that they talked about better "performance / watt" but avoided better "performance / $". I don't expect it will be more expensive than the Intel/AMD offerings, but it probably won't be 'dirt cheap' either (higher price at lower volumes than the phone chips). That is the other issue: will Apple bring down the cost of the Mac bill of materials with a shift to ARM at all? If there are zero consumer-facing system cost savings, what do they actually get?