I'm 99% sure that the 2-fan iMac was supposed to get an M1x which for some reason didn't make the cut. That would also have turned the two non-Thunderbolt USB-C ports into Thunderbolt ports.
That would be one possible explanation. It is curious that they offered double the fan capacity for the models that have just one extra GPU core.
It is possible that Apple plans to put a bigger chip in the two-fan iMac in the future (or that future entry-level chips will scale better with the available thermal headroom). It is also very much possible that this is just the usual Apple marketing tactic to nudge people toward the second-tier model.
Instead, responsible businesses *understand* that many changes *can't* be predicted, and thus they plan for the unexpected by purchasing, when applicable, products that are modular/upgradeable. The difference, I would say, is between those who understand the limits of their knowledge, and those who don't.
I am not sure how accurate this is. I spent around a decade managing the IT resources of a mid-size research department, and I can assure you that concerns over modularity or upgradeability never even entered the equation. For the regular work machines, you buy what you need every couple of years; they are cheap. Even our supercomputer department never upgraded anything: they had a financial plan and would just buy new hardware every five years or so. For them, it was more important that the systems were extensible, e.g. that they could buy a new server blade or a new storage unit to extend the existing system.
Upgradeability in a business environment is rarely meaningful:
- It puts a lot of pressure on the usually strained resources of IT departments that really have better things to do instead of tinkering with components
- You are putting yourself at financial risk by investing in hardware outside of its warranty or service window
- By the time you need to upgrade, computers have likely become much faster anyway, so replacing the system is often the better choice
- Upgrading rarely saves any meaningful amount of money anyway (if the new GPU costs 60% of what the whole computer did, you are not saving anything; those few $$$$ are just peanuts)
- Should your demands change in an unpredictable manner, upgradeability does not help. It is never the case that you go "oh, I have misjudged how much RAM I need, I should have bought 64GB instead of 16GB". If you find yourself in such a situation, it's not just more RAM that you need. You likely need a bigger system overall.
- Equipment is cheap, labor is expensive
My impression from all this is that upgradeability is mostly a thing for the home enthusiast PC builder, who likes tinkering with components, upgrades every time a new gaming GPU comes out, and has a limited budget.
When it comes to RAM, apart from the power and speed costs of socketed DIMMs, there is also the aspect of granularity. A DIMM presents a 64-bit wide path to DRAM; typical consumer PCs, including Apple's offerings, use two DIMMs in parallel for a 128-bit wide interface in total. More professionally oriented products use more channels, the widest workstation systems today being AMD's Threadripper Pro line, which can operate 8 DIMMs in parallel for 512 bits' worth of DDR4-speed data, offering a nominal bandwidth of roughly 200GB/s.
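The nominal figures above fall out of simple arithmetic: channels × transfer rate × bytes per transfer. A quick sketch (DDR4-3200, i.e. 3200 MT/s, is my assumption here; the post doesn't name a speed grade):

```python
def nominal_bandwidth_gbs(channels: int, transfer_rate_mts: int,
                          bytes_per_transfer: int = 8) -> float:
    """Peak DRAM bandwidth in GB/s.

    Each 64-bit channel moves 8 bytes per transfer; multiply by the
    transfer rate (in MT/s) and the number of parallel channels.
    """
    return channels * transfer_rate_mts * 1e6 * bytes_per_transfer / 1e9

# Dual-channel consumer PC (128-bit interface):
print(nominal_bandwidth_gbs(2, 3200))   # 51.2 GB/s

# Threadripper Pro, 8 channels (512-bit interface):
print(nominal_bandwidth_gbs(8, 3200))   # 204.8 GB/s, i.e. the ~200GB/s above
```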
Going wider than that, though possible, is impractical even for dedicated workstations. It's also worth noting the total number of DRAM dies such a configuration allows.
Scaling up the M1 in terms of processing capabilities quickly runs into the issue of not being able to scale up the memory subsystem correspondingly, and short of going completely proprietary, there isn't a do-it-all solution around. Which, I guess, is why the question keeps popping up in these threads.
Yep, that's pretty much it, and that's what folks have difficulty grasping. We are so used to the "traditional" modular PC industry that we tend to forget that it comes with its own set of issues and tradeoffs. PCs are modular because it is a market consisting of many players (both manufacturers and users), and modularity is what allows this industry to thrive and to compete, while giving users the flexibility to mix and match. But modularity limits performance, and Apple builds integrated systems, not mix-and-match systems for the general market.
Going from 8 to 128 GPU units is a factor of 16. No socketed memory system will offer 1100GB/s in a Mac Pro.
It's not impossible, but I don't know whether I would call a system like that user-upgradeable. This kind of bandwidth is achievable with 16 DDR5 SO-DIMM slots, all of which have to be filled with identical modules. You can't just decide to add another stick or two if you feel like it; you have to replace everything at once. Any change in RAM configuration in a system like that would cost thousands of dollars, and that's on top of the already extremely expensive mainboard (wiring and powering 16 RAM channels does not come for free). I mean, even something as trivial as 64GB of RAM would be prohibitively expensive (how much would 16x 4GB DDR5 modules cost?)
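To put rough numbers on the 16-slot idea, here is a back-of-the-envelope sketch. DDR5-6400 is an assumed speed grade (real DDR5 splits each 64-bit module into two 32-bit subchannels, which doesn't change the totals); faster grades would close the gap to the 1100GB/s target with fewer modules:

```python
import math

# Assumed: DDR5-6400 SO-DIMM, 64-bit (8-byte) wide -> 51.2 GB/s per module
per_dimm_gbs = 6400e6 * 8 / 1e9

target_gbs = 1100.0  # the figure quoted above for a 128-GPU-core part

# Modules needed to reach the target at this speed grade
dimms_needed = math.ceil(target_gbs / per_dimm_gbs)
print(dimms_needed)          # 22 modules at DDR5-6400

# What 16 filled slots actually deliver at this speed grade
print(16 * per_dimm_gbs)     # 819.2 GB/s
```

So even with 16 slots, every one filled with identical fast modules, you are only in the same ballpark; actually matching 1100GB/s needs either more slots or speed grades beyond DDR5-6400.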
Personally, I’m strongly in favour of speed over extendability.
Exactly. In the consumer market, one is used to things being modular and extensible. In the professional market, nobody really cares. HPC hardware is optimized for speed, not upgradeability. Modularity is interesting at the node level (so that you can replace failed nodes or add new ones), not at the sub-node level. And Apple Silicon is basically HPC design shrunk down to the consumer market.