So what happens if Apple decides to use its in-house chips and not use Intel CPUs at all?!
Those are not the only choices Apple has.
1. They could split the line up, mobile vs. desktop, which means they wouldn't drop Intel completely.
Apple is nowhere near equivalence with the iMac, iMac Pro, and Mac Pro CPUs in breadth and scope of capabilities. Maybe in some single-threaded drag race on a "fits in L3 cache" tech-porn benchmark, but a broad-spectrum replacement for large memory loads across a high number of threads they don't have.
The notion that Apple is going to do a "Big Bang", "12 months or less" replacement of the whole Mac lineup any time soon doesn't have much substance at this point.
2. Apple could drop Intel and still ship x86_64-based systems (AVX gen 1, gen 2, and gen 3 included).
The Mac Pro 2019 could be completely useless 5 years later, for example in 2025 (consider that you pay $40,000 for a top Mac Pro configuration for just 5 years).
Apple's Vintage and Obsolete policy doesn't factor system cost into the formula for computing time to Obsolete status at all. Even if Apple does keep buying Intel chips, as long as they don't go back into Rip Van Winkle mode and instead update the Mac Pro every 1.5-2 years, in 5 years your $40-90K system will be on a countdown clock.
Apple "Big Bang" wiping out all the 100+M x86 Mac in less than 5 years would be very tough for them. Especially, since users have been on a multiple year trend of increasing the amount of time between their personal computer upgrades ( computers are getting longer cycles. Phones are relatively shorter cycles but they are
also increasing the cycle length. )
What Apple did on the 68K-to-PPC and PPC-to-x86 transitions was on a much smaller installed base (an order of magnitude or two smaller) with much faster 'native' upgrade cycles.
It's similar with Windows. There may be an uptick in Windows-on-ARM systems by the end of 2018 that gets the tech-porn press all hot and bothered, but the installed x86 base isn't going to collapse in a couple of years. Apple has some of the same issues now.
It's not fair; I think it's not the right course of action if they take it!
Developers can use AVX, AVX2, and newer technology for faster processing while still considering compatibility with older systems.
If 95% of the installed base has AVX (from v1 on up), is it really fair to hold back development for that group so that some subset of the other 5% can split the development resources in half for a shrinking group?
If it were simply a magical compiler switch on the same code to build maximally optimized libraries, that would be one thing. But if there are custom code hints/directives, profiling, and assembler to be tuned to create the feature, then it is a substantive resource hit to maintain two development tracks.
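For a sense of what those "two development tracks" look like even in the simplest case, here is a minimal runtime-dispatch sketch in C. It uses GCC/Clang's real `__builtin_cpu_supports` builtin and the `target` function attribute; the `saxpy` functions themselves are hypothetical stand-ins for whatever hot loop is being optimized. Note that even this toy case means two copies of the kernel to write, test, and profile:

```c
#include <immintrin.h>
#include <stddef.h>

/* Baseline path: plain scalar code, runs on any x86_64 CPU. */
static void saxpy_scalar(float a, const float *x, float *y, size_t n) {
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* AVX path: 8 floats per iteration. The target attribute lets this
   one function use AVX while the rest of the file stays baseline. */
__attribute__((target("avx")))
static void saxpy_avx(float a, const float *x, float *y, size_t n) {
    __m256 va = _mm256_set1_ps(a);
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 vx = _mm256_loadu_ps(x + i);
        __m256 vy = _mm256_loadu_ps(y + i);
        _mm256_storeu_ps(y + i, _mm256_add_ps(_mm256_mul_ps(va, vx), vy));
    }
    for (; i < n; i++)  /* scalar tail for the leftover elements */
        y[i] = a * x[i] + y[i];
}

/* Dispatcher: one binary serves both the 95% with AVX and the
   pre-AVX stragglers, at the cost of maintaining both paths. */
void saxpy(float a, const float *x, float *y, size_t n) {
    if (__builtin_cpu_supports("avx"))
        saxpy_avx(a, x, y, n);
    else
        saxpy_scalar(a, x, y, n);
}
```

And that's the easy version; once the fast path involves hand-tuned assembler or restructured data layouts rather than a recompile, the two tracks diverge much further than this.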
So is it fair for the 95% to be paying for code that doesn't use the better hardware they paid for? (Paying for slower code is fair how?)
The Xeon X5690 is not a bad CPU by any standard, and it can handle any application and heavy processing.
They don't have to be completely exclusive!
If all of these new things that modern AVX2-optimized code is going to do in the near future were eminently doable on an X5690, why didn't folks do them 5 years ago? For workloads that were 'heavy' 5 years ago, the X5690 works. If some folks' workloads haven't changed in the last 5 years, then it will keep working. But there were tasks the X5690 couldn't do before and still can't do now. If it is in the highly vectorized code solution space, the X5690 just isn't going to keep up. Non-vectorizable code, sure, but that is old workloads and old code.
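To put rough numbers on "isn't going to keep up", using the commonly cited peak-throughput figures (back-of-envelope, not a benchmark): the X5690's Westmere core tops out at SSE width, one 128-bit add plus one 128-bit multiply per cycle, i.e. 4 + 4 = 8 single-precision FLOPs per cycle per core. A Haswell-class core with AVX2+FMA can retire two 256-bit FMAs per cycle, i.e. 2 × 8 × 2 = 32 FLOPs per cycle per core. That's a 4x per-clock gap on vector code before clock speed, core count, or memory bandwidth even enter the picture.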