does apple need DIMMs for say an HIGH SPEED SWAP / TEMP RAM DISK?
As an absolute, hard-requirement 'need'? No.
Apple does have a narrow subset of the Mac Pro user base who de facto create large RAM disks (load a large digital A/V library in once, then avoid touching the storage drive for that library for the rest of the session). But *DDR* RAM as the foundation for a real, old-school solid state drive is very expensive relative to Flash. Many folks manage to get productive work done without one.
diskutil can create a RAM disk. If that were extended so that a straight DMA copy of a default disk image was loaded into this secondary RAM at the end of the boot phase, then every access after the load would be fast. Or it could start nearly empty and serve as a scratch drive that is always around after boot (no wear issues with a RAM disk, so throwing 2TB of writes at it per day is a great fit). Faster than any Flash-based SSD, but also lots more expensive. Since Apple controls the SoC, they could put an SSD controller in "front" of this RAM pool and present it to the rest of the system as a super fast SSD with a fixed use. No wear leveling to worry about and no long-term persistent metadata, so it could be a pretty small controller.
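For reference, the existing diskutil/hdiutil route looks roughly like this today (the size, volume name, and library path are just illustrative placeholders, not anything Apple-specific):

# Attach a RAM-backed block device. ram:// takes a size in 512-byte
# sectors, so 4194304 sectors = 2 GiB. The device node is printed.
DEV=$(hdiutil attach -nomount ram://4194304)

# Format and mount it as a regular volume named "RAMDisk".
diskutil erasevolume HFS+ "RAMDisk" "$DEV"

# Optionally pre-load it once so later reads never touch the Flash SSD
# (the "load a disk image at the end of the boot phase" idea above).
cp -R "/path/to/AV-Library" /Volumes/RAMDisk/

# Tear it down when done; contents are lost.
hdiutil detach "$DEV"

What Apple would be adding in the scenario above is essentially doing this automatically, in hardware, behind an SSD-style controller.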
Swap could also be woven relatively transparently into the compressed memory system.
"Compressed memory is part of the Mac. Your Mac can make better use of available RAM, improving performance while preventing paging memory to disk."
(www.lifewire.com)
So when the system needed a place to send compressed memory to free up space, it could drop a decent amount of it off to this now-primary RAM disk swap drive first, and then onto the Flash SSD only if space ran out (dropping the wear load on the Flash SSD has big upsides for a projected very long system service life). Apple may need to encrypt that swap to meet its security standards going forward. [Some data is memory-mapped in from a file, and that should just go back to the file if it is inactive and needs to be swapped out. If doing more aggressive memory compression and shuffling, a higher number of E cores should come in handy; they soak up most of the overhead.]
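As a rough sanity check on how much macOS already leans on the compressor versus disk swap, the stock tools show it (no special setup assumed; exact field names vary a bit by macOS version):

# Pages held by the in-memory compressor, plus swap-in/out counters.
vm_stat | grep -Ei "compress|swap"

# How much swap has actually spilled out to the SSD-backed swapfiles.
sysctl vm.swapusage

# The swapfiles themselves currently live on the internal SSD.
ls -lh /private/var/vm/

The proposal above amounts to sliding a RAM-backed tier between the compressor and those swapfiles.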
But for folks who have an active working set of data that is 400-600GB and is being actively touched by 100 CPU/GPU/NPU cores at the same time, those solutions aren't going to work well. Similar to someone running 4-5 80GB VM images on the same machine. So it won't make everybody happy. However, Apple would leave an incrementally smaller number of users behind.
What is common is that there are customers who want higher disk I/O, and faster disk I/O makes them happy in some contexts. Where those match up, it is a 'win'.
Apple probably has better numbers, but there is likely some threshold of 256, 384, or 512 GB of RAM where they cover 95% of users' active workloads. If they can get there with soldered-on RAM, then they may just not bother with the last 5% of a group that is already relatively very small. 5% of 1% is approximately zero. It won't make much sense to spend substantive (expensive) additional design work for an approximately zero market share.
There is the "I want to buy RAM cheaper" crowd. If they are relatively cost sensitive now, then they are probably still going to be cost sensitive in the future. Even though RAM DIMMs will probably drop in cost over 4-5 years, that RAM-backed SSD is still going to be substantially more expensive than a Flash SSD. More than 3-5 years out, will it still be that other drive they drift toward? Decent chance, yes.
Similar for the faster "swap drive" that they initially don't use but would then want in the future, at a relatively high price (versus a future Flash SSD).
As long as Apple is putting a relatively high floor on the entry-level embedded SoC RAM, most of the cost-sensitive folks are going to balk at that initial buy-in. They want an almost "bare-bones" box with the minimum RAM DIMMs Apple will allow. The swap/RAM disk really doesn't directly address that issue much at all.
Most of the time the argument is that their RAM working-set footprint is going to quadruple unexpectedly in 3-4 years and the system has to adapt (jumping from the 55th percentile up to the 85th percentile in workload footprint class). Apple raising the 'floor' of the minimum SoC RAM tends to weed a substantial number of those folks out. So if the minimum is 72GB or 144GB for a new Mac Pro, most users are going to buy with a decent amount of headroom to grow into.
P.S. The other useful path could be for GPU textures that only the GPU is going to touch. If there is a disk-to-GPU memory API, and the data is never going to be shared with the other processor cohorts on the Unified Memory pool, it might as well go into something private that DMA fills while bypassing the main system cache. Textures tend to be read-only, so this is primarily just a huge read-only cache in front of the drive. If the GPU is decompressing the textures, it also would be a decent place to page them back out to in case it needs to grab them again. Like the "load a RAM disk at boot" idea, if there is a list of texture resources, an asynchronous DMA engine could be loading a prospective long-term 'hot list' of textures in the background while not clogging up the main memory system for the shared/unified workload (presuming the file system isn't doing a large bulk load of something else).
That one may not be as transparent to the SoC cores. But it is illustrative that it may not be all about keeping the CPU cores happy. There are several sets of cores sitting on the Unified Memory pool. Each could use a private pool of memory for a subset of workloads. Pull those out and you get more common shared memory to use on mainstream workloads.