It seems we're having a semantics debate here...
There are plenty of system components and controllers present on an Apple SoC that are simply not present on an Intel processor.
Even if Intel's marketing team wants to call their Haswell and newer chips SoCs, they aren't by any common definition.
Anyway, "unified memory architecture" is a broad term as you're using it here. It's like saying that a Hyundai Elantra and a Tesla Model X are both cars.
Your original post is looking at the Apple Silicon GPU's use of shared memory as though its implementation of UMA is the same as that of an Intel integrated graphics processor. It's not. It's probably more similar to that of an AMD APU than an Intel CPU, but even that's not a like-for-like comparison to be making here.
That's a bit of an oversimplification. Yes, Intel's IGPs sucked, on the whole. You'd be fooling yourself if you said that AMD's were much better. They weren't. Yes, you're dealing with a weaker GPU (one that has to share the die with the CPU). The NVIDIA IGPs used in Macs from 2008-2011 drastically improved things over the Intel GMA X3100 of that era, but they still paled in comparison to any discrete GPU. It's not that Apple's IGPs are better than Intel's. It's that Apple's SoC system architecture allows its IGPs to share memory with the system without sacrificing performance as a result.
Any Intel Mac with a discrete GPU will always have better graphics performance than any Intel Mac with any integrated graphics processor, whether it's made by Intel, AMD, or NVIDIA. Period.
Apple's architecture, while it does employ RAM shared between the GPU and CPU, does this differently. Don't ask me how (I'm not even sure Apple has fully revealed it yet; if they have, it'll be in a WWDC 2020 video). But it does.
This is how their GPU can be compared favorably to that of the AMD GPU in the Xbox One S.
You don't abandon the UMA by adding a discrete GPU, especially if that discrete GPU can take on however many tasks get offloaded to it from the rest of the SoC. It's no different from having a Mac that sends its complex rendering jobs to a dedicated render farm.
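To put the shared-memory point in concrete terms, here's a toy sketch. Everything in it is my own illustration (the functions, the buffer sizes, the whole model are hypothetical), not Apple's, Intel's, or AMD's actual design; it just shows why a truly unified allocation avoids the bus copies a discrete-VRAM model requires:

```python
# Toy model (hypothetical, not any vendor's real implementation) of the
# data movement implied by discrete VRAM vs. unified memory.

def render_discrete(scene: bytes) -> int:
    """Discrete GPU: upload the scene to VRAM, render, copy the result back."""
    vram = bytes(scene)          # PCIe upload: one full copy into VRAM
    result = vram                # rendering happens against VRAM
    readback = bytes(result)     # copy the finished frame back to system RAM
    return len(scene) + len(readback)   # total bytes moved over the bus

def render_unified(scene: bytearray) -> int:
    """Unified memory: CPU and GPU address the same allocation."""
    # The GPU reads and writes the very same buffer the CPU prepared,
    # so no upload or readback copy is needed.
    return 0                     # bytes moved over an external bus

scene = bytearray(1024)          # a 1 KiB "scene" for illustration
print(render_discrete(bytes(scene)))  # 2048 bytes copied
print(render_unified(scene))          # 0 bytes copied
```

The real win is that "shared" in a UMA like Apple's means one allocation visible to both processors, not a carved-off partition of system RAM that still has to be filled by copies.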
And yes, the "Integrated Graphics" of the Intel era is NOT the "Integrated Graphics" of the Apple Silicon era, but that's also why I originally said that you're looking at it (the question of whether Apple uses HBM2 or GDDR6 for VRAM) the wrong way. Comparing Apple's graphics architecture with any graphics architecture we've seen on any Intel or PowerPC Mac up to this point is the epitome of comparing Apples to Oranges. The secret sauce in Apple's GPUs appears to be tile-based deferred rendering. The following videos should give you as good an idea as anyone can have of this until the new Macs see the light of day:
- Discover how Macs with Apple silicon will deliver modern advantages using Apple's System-on-Chip (SoC) architecture. Leveraging a unified... (developer.apple.com)
- Meet the Tile Based Deferred Rendering (TBDR) GPU architecture for Apple silicon Macs — the heart of your Metal app or game's graphics... (developer.apple.com)
- Apple silicon Macs are a transformative new platform for graphics-intensive apps — and we're going to show you how to fire up the GPU to... (developer.apple.com)
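For a rough feel of why TBDR matters before those videos load, here's a toy sketch. It's purely my own illustration (the tile size, screen size, and layer count are made up, and real GPUs do this in hardware, not Python): it counts framebuffer writes for overlapping opaque layers under immediate-mode rendering versus a tile-based deferred scheme that resolves visibility in on-chip tile memory first:

```python
# Toy illustration (not Apple's implementation) of the memory-bandwidth
# saving behind tile-based deferred rendering (TBDR).

TILE = 4            # tile width/height in pixels (real GPUs use e.g. 32x32)
WIDTH, HEIGHT = 8, 8

# Three full-screen opaque layers; higher index = drawn later (on top).
layers = ["red", "green", "blue"]

# Immediate-mode: every draw writes every covered pixel to the framebuffer
# in memory, even pixels that will be overdrawn by a later layer.
immediate_writes = len(layers) * WIDTH * HEIGHT

# TBDR: within each tile, visibility is resolved in fast on-chip tile
# memory, and only the final color of each pixel is written out once.
tbdr_writes = 0
for ty in range(0, HEIGHT, TILE):
    for tx in range(0, WIDTH, TILE):
        tbdr_writes += TILE * TILE   # one write-out per pixel in the tile

print(immediate_writes)  # 192 framebuffer writes (with overdraw)
print(tbdr_writes)       # 64 framebuffer writes (one per pixel)
```

With three layers of overdraw, the deferred scheme touches external memory a third as often, which is exactly the kind of bandwidth saving that lets a GPU live happily on shared system RAM.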