"slow GPU" would be sGPU. How does an 'i' a short from of the word 'slow'? Same for 'f' and 'd' which are different.
This is largely nonsense. dGPU and iGPU are abbreviations for 'discrete GPU' and 'integrated GPU'. That some people have mapped dubious extra connotations onto those two implementation approaches does NOT mean the meanings are fuzzy. You are trying to throw the baby out with the bath water. The only thing necessary is to throw out the poopy bath water connotations, not the words themselves. They are more than abundantly clear if you stop taking typing shortcuts and pulling "definitions" out of thin air.
Integrated means just that: integrated. If the GPU is on the CPU die, then it is integrated. If the CPU and GPU share the exact same pool of system RAM resources, then again it is integrated (not separate or segregated). Discrete means separated from: not on the die and not sharing the same resources. Historically, it very likely also means the GPU can be replaced. Apple has a high tendency to use dGPUs as embedded processors (meaning soldered, non-replaceable, and on the logic board with the CPU), but those embedded solutions still have a separate primary memory store (VRAM).
Apple is also quite clear about what they mean by unified memory:
"... All iOS and tvOS devices have a unified memory model in which the CPU and the GPU share system memory. However, CPU and GPU access to that memory depends on the chosen storage mode for your resources. The
MTLStorageModeShared mode defines system memory accessible to both the CPU and the GPU, whereas the
MTLStorageModePrivate mode defines system memory accessible only to the GPU. ...
"
(Source: developer.apple.com, "Select an appropriate storage mode for your textures and buffers on Apple GPUs.")
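For concreteness, here is a minimal Swift sketch of allocating the two storage modes that doc describes (the buffer length is an arbitrary example value, and error handling is trimmed):

[CODE]
import Metal

// Minimal sketch of the two storage modes from the Apple doc.
// 4096 is an arbitrary example size; error handling is omitted.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

let length = 4096 // bytes

// Shared: one system-memory allocation visible to both CPU and GPU.
let sharedBuffer = device.makeBuffer(length: length,
                                     options: .storageModeShared)

// Private: memory only the GPU can touch; the CPU has to fill it
// indirectly, e.g. via a blit from a shared staging buffer.
let privateBuffer = device.makeBuffer(length: length,
                                      options: .storageModePrivate)
[/CODE]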
Sharing a single system memory pool is a double-edged sword. One upside is that it can cut down on the number of data-copying actions a developer needs to do. The other upside is that it is "cheap", both in space costs (only one set of RAM packages) and in materials costs (typically buying less RAM system-wide). The downside is that the CPU (and other processors) have to share memory bandwidth with the GPU. Pragmatically, that puts a limit on parallelism and concurrency: with too many consumers of memory bandwidth and not enough supply to feed them, you run into Amdahl's Law effects. If you make a copy instead, then it is not necessarily a cache coherence problem anymore (and you can get rid of that overhead as well).
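To make the "make a copy and drop the coherence overhead" point concrete, here is a short Swift/Metal sketch (the buffer names and sizes are mine, purely illustrative) that stages data through a shared buffer into a private, GPU-only one:

[CODE]
import Metal

// Sketch: stage CPU data through a shared buffer into a private one,
// so later GPU reads pay no CPU/GPU coherence traffic.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("No Metal device available")
}

let input = [Float](repeating: 1.0, count: 1024)
let byteCount = input.count * MemoryLayout<Float>.stride

// CPU writes land here...
let staging = device.makeBuffer(bytes: input, length: byteCount,
                                options: .storageModeShared)!
// ...and the GPU-only working copy lives here.
let gpuCopy = device.makeBuffer(length: byteCount,
                                options: .storageModePrivate)!

let cmd = queue.makeCommandBuffer()!
let blit = cmd.makeBlitCommandEncoder()!
blit.copy(from: staging, sourceOffset: 0,
          to: gpuCopy, destinationOffset: 0, size: byteCount)
blit.endEncoding()
cmd.commit()
[/CODE]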
The load-versus-compute time ratio decides which edge of that sword you mostly land on. If the load time is short and the compute time is high, then making a copy will probably pay off quite well on embarrassingly parallel workloads. If the load time is high (or both the CPU and GPU have to claim lots of cache coherence 'locks'/exchanges) and the compute time is relatively short, then not.
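A back-of-envelope sketch with made-up numbers shows how the ratio decides the winner:

[CODE]
// Hypothetical per-frame times; the numbers are illustrative only.
let copyTime = 2.0          // ms: one-time blit into private storage
let computeAlone = 30.0     // ms: GPU compute with uncontended bandwidth
let contentionPenalty = 1.4 // slowdown when CPU and GPU share the bus

let copied = copyTime + computeAlone          // 32 ms: load short, compute long
let shared = computeAlone * contentionPenalty // 42 ms: everyone fights for bandwidth

// Flip the ratio (copyTime = 20, computeAlone = 5) and sharing wins instead.
print(copied < shared ? "copy pays off" : "stay shared")
[/CODE]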
The folks who have succumbed to the "sweeping generalization" cognitive bias (e.g., "I once used an Intel iGPU at some point, so all iGPUs have to be slow") are highly likely to do the exact same thing again if you change the name. Switch to uGPU (unified memory GPU) and nuGPU (non-unified-memory GPU), and as soon as people are exposed to gaps between the two, the same thing will re-occur. Changing names doesn't address the root-cause issue, so the problem is extremely unlikely to go away. You might get a temporary reprieve while the sweeping inferences take a while to re-form, but they are still going to be sweeping generalizations.