People here are ranting about the Mac Pro 7,1 and 8,1 mostly driven by Gurman's rumour (which lacks actual evidence).
The 7,1 still has a long life ahead, not just for keeping up with the legacy x86 ecosystem, but also for content creation and development work that isn't app-bound, like 3D, audio/video, even AI/ML: Metal 3 supports TensorFlow and PyTorch, which are front-end agnostic as long as the backend is supported and performant. So aside from raw speed, doing ML on a Mac Pro 7,1 versus an 8,1 will differ only in GPU performance (assuming new PCIe GPUs won't be backwards compatible with the 7,1).
I have no doubt Apple is capable of delivering a quad M2 Max rig; I'm just curious about the chosen layout for the four M2 dies: linear (like dominoes), overlapped in an S, or my guess, arranged in a + sharing an octagon-shaped quad bridge or interposer.
A safe bet is that its top CPU configuration will be 4x M2 Max, with peak GPU performance below 60 TFLOPS FP32 (close to a single RX 7900 XTX).
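That sub-60 TFLOPS figure can be sanity-checked with the usual peak-throughput formula (ALUs x clock x 2 FLOPs per fused multiply-add). A minimal sketch, assuming the commonly cited, non-Apple-confirmed M2 Max figures of 38 GPU cores with 128 FP32 ALUs each at roughly 1.4 GHz:

```python
# Back-of-envelope FP32 peak for a hypothetical quad M2 Max package.
# Assumed figures (public estimates, not confirmed by Apple):
#   M2 Max GPU: 38 cores x 128 ALUs = 4864 FP32 ALUs at ~1.4 GHz
# Peak FLOPS = ALUs x clock x 2 (one fused multiply-add per cycle)

alus = 38 * 128                               # 4864 FP32 ALUs per M2 Max
clock_hz = 1.4e9                              # ~1.4 GHz GPU clock
per_die_tflops = alus * clock_hz * 2 / 1e12   # ~13.6 TFLOPS per die
quad_tflops = 4 * per_die_tflops              # ~54.5 TFLOPS for four dies

print(f"single M2 Max: {per_die_tflops:.1f} TFLOPS")
print(f"quad M2 Max:   {quad_tflops:.1f} TFLOPS")
```

Under those assumptions the quad lands around 54.5 TFLOPS, comfortably under 60 and in the same ballpark as one RX 7900 XTX (~61 TFLOPS FP32).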
As for how much RAM upgradeability, and of what kind: I'm convinced Apple has no barrier other than marketing to supporting discrete RAM modules. On the format, DIMM and SO-DIMM are the popular choices, but Apple doesn't care about being popular. So while I consider the Mac Pro 8,1's RAM to be upgradeable, that doesn't mean you'll be able to buy a barebones Mac Pro and order a bunch of cheap RAM sticks from Amazon to evade the Apple tax. I doubt Apple will offer only soldered RAM under the "unified memory" motto; pros (actual pros, not gamers) don't buy that bullsh1t, as it's just a fancy name for old-fashioned shared memory. It's also impractical for sales, since RAM requirements for pro workflows vary tremendously; there's no set of four RAM sizes that fits everyone interested in a Mac Pro.
It's well known across the industry that Apple supports new non-Apple GPUs: granted, only AMD so far, but don't rule out Intel's Arc at some point, and maybe even Nvidia, though that would really require a miracle.
I can imagine an RX 7900 XTX Duo PCIe 5 MPX module; two of these added to the quad M2 iGPU would peak near 300 TFLOPS, shutting mouths as the most powerful sub-1.5 kW workstation (also supporting hardware-accelerated ray tracing). Do you think John Ternus would be ashamed?
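The ~300 TFLOPS figure checks out arithmetically. A quick sketch, assuming the hypothetical dual-GPU MPX module above, AMD's published ~61.4 TFLOPS FP32 peak for the RX 7900 XTX, and the commonly estimated ~13.6 TFLOPS per M2 Max GPU:

```python
# Two hypothetical dual-GPU RX 7900 XTX MPX modules = 4 discrete GPUs,
# plus the quad M2 Max iGPU. All figures are spec-sheet peaks/estimates.

xtx_tflops = 61.4                  # AMD's quoted FP32 peak per RX 7900 XTX
m2max_tflops = 13.6                # estimated FP32 peak per M2 Max GPU

dgpu_total = 2 * 2 * xtx_tflops    # two duo modules -> 4 GPUs: ~245.6
igpu_total = 4 * m2max_tflops      # quad M2 Max iGPU:           ~54.4
total = dgpu_total + igpu_total

print(f"combined peak: {total:.0f} TFLOPS FP32")
```

Peak numbers from different architectures don't add up in any real workload, of course, but as a marketing-slide total it lands right at ~300.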
But I think another big "one more thing..." is necessary:
Apple's own ASi GPU or compute card. Most GPUs are really fast at general, highly parallel float or integer calculations but are comparatively weak at tensor operations; a pure ASi TPU/NPU compute module could obliterate even the new Nvidia Hopper H100 exactly where Nvidia is weak, since in tensor-processor IP Nvidia doesn't actually have an edge. Apple may even have outsourced it to Jim Keller and kept it under the radar.
Now imagine John Ternus showing that "one more thing...": a single 200 W TPU/NPU accelerator capable of dethroning Nvidia's yet-to-be-released H100 at machine-learning training.
I hope that ASi TPU/NPU doesn't stay in Dreamland; if it ships, for the first time we'd see queues at Apple Stores of people wanting an ASi Mac Pro, though not everyone: mostly AI/ML CEOs and CTOs would be the ones lining up.