I do think the opportunity for many-core ARM chips is great. 80-core ARM chips are out already. Overall, I'm psyched!
The equation's constraints are:
1st, make a $20,000 (on average) investment now in hardware that will feel (not actually be, just feel) vintage/obsolete in just 18-24 months, with a resale value, with good luck, of $4,000 (rough numbers in the sketch after this list).
2nd, assuming your toolchain is compatible with and benefits from macOS's gifts (such as no CUDA, no OpenCL, etc.), what do other platforms offer for my workflow:
-Windows: instability, viruses, etc., but it's the damn hardware-compatibility king, and almost every Mac app has a native Windows version, with rare exceptions. I admit Win10 looks pretty and polite now, though still irritating at times. Ah, and CUDA and OpenCL work fine there.
-Ubuntu (not to mention other Linux distros): while it theoretically has wide hardware compatibility, that is just a theorem waiting for a proof; but if you follow the proven buy/implement procedure, you get the most stable, virus-free operating system and the widest API availability, including CUDA, ROCm, and oneAPI, and each and every DataScience/MachineLearning/HPC toolchain is not just native, it also has solid support (a quick sanity check is sketched after this list). Its weaknesses are traditional apps: Office, media, CAD. LibreOffice is almost all you need for whatever you need, and FreeCAD/OpenSCAD/Blender are also solid for 3D, but for image/video editing there's nothing even close to second-class mainstream... UNLESS you run those apps virtualized inside a W10 container or a VM (it works for most AutoCAD/Adobe CC), and you can even run a Hackintosh virtualized, all day. I forgot to mention that Ubuntu Desktop is now as intuitive and consistent as macOS, and applications are easy to install.
Result: SAIL AWAY AS SOON AS YOU CAN
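To put rough numbers on the 1st constraint (using the $20,000 and $4,000 figures above, which are my ballpark guesses, not real quotes):

```python
# Rough cost-of-ownership math for the Mac Pro scenario above.
# All figures are the ballpark estimates from this post, not real quotes.
purchase_price = 20_000   # initial investment, USD
resale_value = 4_000      # optimistic resale after the useful window, USD
months_of_use = 24        # 18-24 month window; take the optimistic end

loss = purchase_price - resale_value
print(f"Total depreciation: ${loss:,} ({loss / purchase_price:.0%} of purchase)")
print(f"Effective cost per month: ${loss / months_of_use:,.0f}")
```

That's an 80% loss, roughly $670 a month just in depreciation, before you buy a single accessory.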
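And for the 2nd constraint, this is the kind of quick sanity check I mean by "native and solidly supported" on Ubuntu. It assumes a CUDA build of PyTorch is installed, which is just one illustrative choice among the toolchains I listed:

```python
# Minimal check that a GPU compute stack is actually usable on this box.
# Assumes a CUDA-enabled PyTorch install; ROCm builds expose themselves
# through the same torch.cuda API, so the check reads the same.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"  GPU {i}: {torch.cuda.get_device_name(i)}")
```

On macOS the same check simply reports no CUDA devices, which is the whole point.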
Today I "built" (I plan to order later) a workstation/server for compute, to run Ubuntu on:
32-core Threadripper + 256GB RAM + 4 liquid-cooled RTX Titans + 4TB PCIe 4.0 SSD: 11 grand, with the only compromise being memory (later I could consider an AMD EPYC build; for about $5,000 extra I could put 64 cores and 2TB of RAM (even 2P, for 128 cores) into the configuration). I don't need to do big math to predict it will crush every Mac Pro performance benchmark (at least among compatible benchmarks); there is simply no way a Mac Pro makes sense for me.
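To be clear, I haven't run numbers on either machine yet, but this is the sort of portable micro-benchmark I have in mind when I say "compatible benchmarks"; it assumes nothing beyond NumPy, so it runs the same on macOS and Ubuntu:

```python
# Minimal cross-platform compute micro-benchmark: dense matrix multiply
# throughput. Matrix size and repeat count are arbitrary illustrative choices.
import time
import numpy as np

n = 4096
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

best = float("inf")
for _ in range(3):
    start = time.perf_counter()
    np.dot(a, b)
    best = min(best, time.perf_counter() - start)

gflops = 2 * n**3 / best / 1e9  # ~2*n^3 floating point ops per matmul
print(f"Best of 3: {best:.2f} s, ~{gflops:.0f} GFLOP/s")
```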
I don't know what Apple's strategy is, but by not following any open compute standard, or being open to foreign compute standards such as CUDA/oneAPI, it will restrict the macOS ecosystem to developing only for the compute toolchain Apple considers profitable for a Mac; so get a Mac and you'll get trapped developing web, media, and iOS/macOS apps, with barely any ML available (no hope for TensorFlow/PyTorch).
So I don't want to be a second-class IT pro now, by Apple's design.
Maybe in 2 years I'll buy an iMac for office/home duties, but not for work.
I don't buy the "trend" that the ARM architecture will win in raw performance against amd64; maybe against a 2018 amd64 CPU, or one from 2-3 years ago.
Then there's the RISC efficiency myth. big.LITTLE works well and looks set to become a trend among amd64 parts too (besides Intel, there's also a big.LITTLE-style Ryzen mobile CPU on the roadmap), but the RISC instruction set is a trap: while it dispenses with a complex instruction decoder and relies on the compiler for binary optimization (which has worked fine, no way to contest it), the truth is that to catch amd64 in raw performance it has to implement higher clock speeds and/or deeper out-of-order execution pipelines, plus special-purpose instruction set extensions, and both rule out the efficiency advantage. If you can live with this, OK, but some applications need raw performance over long execution times; tests done with Cavium ThunderX confirm that theory: ARM can't hold performance over long runs of heavy compute tasks, nor deliver meaningful power savings.
I see you, Apple; Ubuntu is waiting around for every pro who doesn't need FCPX...