What are the current consensus predictions for chipset models/no. high power and low power cores/clockspeeds/GPU/memory? [I figure you've probably been able to keep better track of this than I have.]
I don't know about the consensus, but the rumors point to a 4+4 CPU (A14X) and an 8+4 CPU (A14T?) config. No mention of GPU specs. Going by historical precedent and common sense, the A14X would have 7 or 8 GPU cores and the "A14T" — who knows — maybe 12 or 16? We'll learn soon enough.
Neither Intel nor AMD stays within its TDP under load, and they haven't for years. Just getting an SoC for the MBP that actually stayed under 45W would be a huge win. My 16" loves to spike up over 90W just for the CPU package, and will frequently sit above 60W under load.
Spiking above the TDP is not a problem — TDP is just a figure for long-term sustained thermal dissipation anyway, and it depends on what a vendor means by it (Apple doesn't publish or discuss TDP for its SoCs in any shape or form anyway). The problem with Intel is that it has to do these absolutely ridiculous power spikes to offer good performance.
I don't expect them tomorrow, but eventually we'll see multi-processor machines. If they threw two A14s in there, that would be 4 high-performance cores (and 8 high-efficiency cores) total. Throw in four A14s and that's 8 high-performance cores total. (But it probably won't be A14s, because they would have to be modified to support multi-processor use — though I guess it's possible they are already designed for it. That would mean wasted transistors in all those iPhone 12s and iPad Airs that have only a single A14, however.)
As others have already pointed out, that is a sure way to kill performance. Multi-processor setups work fine for cluster-style machines that specialize in running multiple (independent) tasks in parallel, such as web servers or scientific supercomputers. They don't work for general-purpose computing. There is NUMA, as pointed out by @casperes1996, which is used for large-RAM machines (I programmed a crappy large-data algorithm for our 4TB RAM supercomputer, interesting experience), but it's certainly not a general-purpose programming domain. Incidentally, I think that Apple is likely to adopt a NUMA-style architecture for their Mac Pro, but for a different reason than you state. I think some sort of NUMA architecture is inevitable if they want to keep unified memory for a pro-level workstation.
Sounds to me like a rather uninformed article. They utterly fail to notice that Apple has been developing their own chips for years — and that these chips are significantly ahead of the curve in terms of energy required to deliver the same performance. They also seem to have no clue about what a software transition like this entails or how porting from x86 to ARM works. It's OK to be skeptical — it is an investor-oriented article, after all — but at least get your facts straight.
But the story is vastly different if your critical software is written in K&R style C with manual pointer arithmetic, making assumptions about memory offsets and alignments, and using compiler intrinsics for AVX instructions, which btw are not translated by Rosetta.
Though I will caveat this by saying that if you write C in a good style — using the sizeof operator instead of hard-coding the memory sizes of types, and so on — you're not in very much trouble there either. So it is only potentially painful in an extremely small set of situations. Just want to point out it isn't always just hitting build.
Memory offsets, alignments, etc. are identical between x86-64 and AArch64 — ARM developed the entire 64-bit instruction set with a lot of foresight (and I have read some rumors that Apple apparently had a hand in it). Low-level compiler intrinsics sound like a problem, until you discover things like these. Overall, if your code is correct C/C++, it will run on AArch64. I mean, didn't Apple mention that it took a single guy under a week to port Photoshop — the stereotype of legacy software — to Apple Silicon? In other news, a single person with a DTK, working in their spare time, made Zig compatible with Apple Silicon targets.
There will be roadblocks, of course. If you have inline assembly, well, you are mostly out of luck. If you have hidden hardware assumptions (page size, hardware register granularity, using the CPUID trick as a serializing barrier, etc.), you are pretty much screwed. The biggest pitfall is multi-threaded programming, because x86 and AArch64 have explicitly different memory ordering guarantees. But even in the latter case, if your code is correct C/C++, it will work correctly on AArch64 — the only problem is that many people ship buggy code without realizing it, since the bugs won't show on x86. To be fair though, this will mostly affect low-level threading libraries and things like allocators — and popular stuff is already tested and packaged for ARM CPUs.
Apple already sucker-punched devs with the whole codesigning debacle. Which followed the hard cutoff forcing apps to 64-bit. Which followed a laundry list of other requirements that'd make this post too long... Developing for macOS is now a big pain in the butt for small-to-medium devs. And now they'll be asking developers to support not just a new architecture, but a new one plus the previous one.
I really don't share your opinion. Developing for Apple is much simpler than developing for any other platform. Forcing the 64-bit transition was painful if you had to maintain badly designed legacy software, but it makes things so much better for everyone in the long run that it had to be done. Codesigning is literally one command, and if you use Xcode, it will take care of it for you; if you don't use Xcode, the linker will do it for you in the background (you only need to deal with it if you intend to distribute). Finally, if your program is competently engineered (which is really not that hard to do), it will support both x86-64 and AArch64 as compilation targets.
I agree, however, that the requirement of being enrolled in the Developer Program sucks for open source. Apple really ought to offer free codesigning identities to verified individuals.
IMO you can't really sell people on thermals. "Look how much cooler the chassis is!", while valid praise in an enthusiast review that nerds like some of us pay attention to, does not seem like the kind of thing you'd put in a keynote.
That leaves battery life and processing performance. They'll both need to be awesome; otherwise either the rumors will need to be decoys (with form factors changing, which is unlikely), or Apple will be in trouble. And let's not forget the laptops rumored to be updated are the ones that would normally get Tiger Lake, which itself is finally a good upgrade for the ultrabook class. Otherwise, why not wait another 6 months?
I think you are missing the fact that the 4-5 watt dual-core iPhone CPU already has performance comparable to that of a 15W Tiger Lake. A slightly higher-clocked quad-core A14 will have single-core performance on par with — or better than — AMD's newly released desktop CPUs, and multi-threaded performance close to that of the Intel i9 in the 16" model. If that is not something to woo enthusiasts, I don't know what will. And of course, let's not forget about the GPUs. The iPhone 12(!!) offers similar performance to Nvidia's MX350 (a 25 watt Pascal GPU), so you can expect entry-level Mac laptops with 8 GPU clusters to outperform an Nvidia MX450.
Personally, I desperately need more performance. Apple Silicon will make Macs the choice for everyone like me — people who need state-of-the-art performance in a mobile package. Heck, I am contemplating switching my 16" i9 for a 13" with an A14X, because it will most likely run my R code much faster.