I don't think our opinions differ that much. And I fully agree with you that these terms can sometimes be useful in a technical discussion, as long as all the interlocutors understand the nuances. But I also think that these notions are potentially dangerous in a casual, non-technical discussion for wider audiences (taking
@MacInMotion's post for example), because they obfuscate reality. Instead of discussing what is actually going on (which might be interesting and educational for a hobbyist curious about these things), labels perpetuate unhealthy myths and overzealous generalizations (like "RISC is low-power, CISC is high-power" or "integrated is slow, dedicated is fast"). Labels are easy, and they tend to get repeated a lot, which makes them seem "right". The end effect is that people stop at the labels and don't bother learning about the actual interesting effects hiding behind them.
First, let me point out that we are in agreement about all of the conclusions of my post, except possibly the impact on memory usage caused by the GPU's lack of dedicated VRAM.
My main point was that several people were making posts which seemed to treat the question of whether an M* Mac uses more or less memory than an Intel Mac as ridiculous, and my post was an effort to explain why it is a legitimate question, regardless of the answer.
For all of that, it seems to have generated a lot of unnecessary pushback. Please let us all chill a bit.
Regarding RISC vs CISC, as I said, in agreement with
@leman, the distinctions have significantly lessened over time. To me, the primary distinction is that RISC chips primarily (if not necessarily exclusively) execute one instruction per clock cycle and do not use microcode, whereas CISC chips make heavy use of microcode and multi-clock instructions. The poster child for this is the integer divide instruction, which ARM did not have until ARMv7, by which point I will agree that any clear-cut distinction between RISC and CISC was lost.
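To make that concrete, here is a minimal C sketch of my own (purely illustrative, not from anyone's post here): the single division below compiles to one multi-cycle divide instruction on x86, while compilers for pre-ARMv7 targets, which had no hardware integer divide, emit a call to a runtime helper such as __aeabi_idiv instead.

```c
#include <stdio.h>

/* The same one-line C expression lowers very differently per target:
 *  - x86/x86-64: a single multi-cycle IDIV instruction
 *  - ARM before v7 (no hardware divide): a call to a runtime helper
 *    such as __aeabi_idiv from the compiler's runtime library
 *  - ARMv7 and later / AArch64: a single SDIV instruction
 */
int divide(int numerator, int denominator) {
    return numerator / denominator;
}

int main(void) {
    printf("%d\n", divide(100, 7));  /* prints 14 */
    return 0;
}
```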
At the same time, my only reference to Intel as CISC and M*/ARM as RISC was to explain what those terms meant historically, and to note that, both historically and currently, the same source code compiles to machine code of different sizes, and that historically the difference was very large. The only other thing I said was that in current practice, M* binaries tend to be smaller. I did not perpetuate any myths (healthy or not) or make any generalizations other than that, over the years, "RISC programs got shorter and CISC programs got longer". I don't think that is overzealous.
Regarding GPUs and VRAM, I stand corrected that Intel integrated graphics hardware uses system memory. I was under the general impression that all Macs had some kind of dedicated graphics card with dedicated VRAM, and thought that at the low end, say 8 GiB of system memory in an Intel Mac, even 2 GiB of VRAM moved to unified RAM would be noteworthy. So let's just say you need to check your current system's video hardware to make a better prediction of the memory impact of moving to M*.
Regarding how VRAM is used, to the best of my knowledge, where a graphics card with VRAM is used and graphics "acceleration" is not disabled, the data in VRAM is only mirrored in system RAM when the VRAM is full and buffers/pages need to be swapped out. In the general case (excluding some special cases such as photos and videos), drivers send graphics commands to the GPU and the GPU expands those commands into pixel buffers. For example, font definitions are uploaded to the GPU (in the form of Bézier curves), then text is sent to the GPU as characters (1-4 bytes each), and the GPU renders the text in the font at the desired size, which takes many more bytes than the text itself. Every open window is backed by such a buffer in VRAM, usually even multiple buffers (one for each embedded image in a web page, one for the scroll bar, one for the window frame, etc.). Window buffers are built by overlaying these buffers on top of each other, and monitor images (desktops) are built by overlaying window buffers on top of the desktop buffer. So one desktop can take up a lot more video memory than 3 bytes times the pixel count.
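As a rough back-of-the-envelope illustration (my own assumed numbers, using 4 bytes per pixel, a common BGRA layout), here is how a handful of bytes of text and commands expands into pixel buffers:

```c
#include <stdio.h>

/* Purely illustrative arithmetic with assumed sizes, 4 bytes per pixel:
 * how a few bytes of "commands" expand into rendered pixel buffers. */
static double mib(long long bytes) {
    return bytes / (1024.0 * 1024.0);
}

int main(void) {
    long long text_bytes   = 80;                /* one line of text, as characters   */
    long long glyph_buffer = 800LL * 30 * 4;    /* that line rasterized at 800x30 px */
    long long window       = 1600LL * 1000 * 4; /* a 1600x1000 window backing buffer */
    long long image_layer  = 1200LL * 800 * 4;  /* one embedded image layer          */

    printf("text sent to GPU:      %lld bytes\n", text_bytes);
    printf("rasterized text:       %lld bytes (%.0fx larger)\n",
           glyph_buffer, (double)glyph_buffer / text_bytes);
    printf("window backing buffer: %.1f MiB\n", mib(window));
    printf("plus one image layer:  %.1f MiB\n", mib(window + image_layer));
    return 0;
}
```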
On top of that, a single window may be rendered in multiple resolutions, e.g. one for the built-in Retina display, one for an external Full HD display, and one for an external 4K display, so that windows can be dragged from display to display or split across displays. Multiple external displays can mean multiple resolutions and multiple color profiles to render, and probably more open windows, too. Little to none of what is in VRAM should be duplicated in system RAM. I am 100% confident that the system never stores code in VRAM, and that it only uses VRAM for non-graphics data in very special cases such as heavy-duty math (like Bitcoin mining), where the GPU can perform the required calculations much faster than the CPU (and in parallel).
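And here is a hypothetical multi-display scenario (again my own assumed figures at 4 bytes per pixel, not measurements) showing how quickly the per-scale and per-display buffers add up:

```c
#include <stdio.h>

/* Hypothetical scenario (assumed display sizes, 4 bytes per pixel):
 * the same window rendered at two scales, plus one desktop buffer per display. */
static double buffer_mib(long long w, long long h) {
    return (double)(w * h * 4) / (1024.0 * 1024.0);
}

int main(void) {
    double window_1x  = buffer_mib(1600, 1000); /* window on a Full HD external display */
    double window_2x  = buffer_mib(3200, 2000); /* same window on a Retina display      */
    double desktop_5k = buffer_mib(5120, 2880); /* built-in 5K desktop buffer           */
    double desktop_4k = buffer_mib(3840, 2160); /* external 4K desktop buffer           */

    printf("window @1x:  %6.1f MiB\n", window_1x);
    printf("window @2x:  %6.1f MiB\n", window_2x);
    printf("5K desktop:  %6.1f MiB\n", desktop_5k);
    printf("4K desktop:  %6.1f MiB\n", desktop_4k);
    printf("total:       %6.1f MiB for one window and two empty desktops\n",
           window_1x + window_2x + desktop_5k + desktop_4k);
    return 0;
}
```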
I only have usage information for myself, and I expect I am on the high end of usage, but I offer my data for whatever it is worth. My Intel Mac has 8 GiB of VRAM. It is nearly always using at least half of that, and it routinely gets to 98% full, at which point I have to assume it starts swapping VRAM out to system RAM, because the computer experiences frequent "freezes" of a few moments that I cannot attribute to anything else. So when I'm looking at switching to an M* Mac, I'm mentally reserving 8+ GiB of unified RAM for graphics.