Intel, AMD, and NVIDIA showed no drive to lower power consumption and expected others to solve their inefficiencies with bulky, noisy cooling solutions. On a Mac Pro that can be solved, but on every other computer it is (or was) a problem. No wonder Apple switched architectures.
This gives Apple more credit than it deserves, and unfairly discredits the other players in the industry.
Take the first M1 GPU. It was released in 2020 and manufactured on TSMC 5nm. Its performance is equivalent to the AMD Radeon RX 560. The latter was released in 2017 and manufactured on GloFo 14nm, a process roughly equivalent to TSMC 16/12nm of the same era. If you re-manufactured the Radeon RX 560 on TSMC 5nm, you would probably get power efficiency similar to the M1 GPU's.
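To make that arithmetic explicit, here's a rough back-of-envelope sketch in Python. The per-node power factors are loose assumptions pulled from foundry marketing claims (roughly 40% savings going 16nm -> 7nm, and TSMC's claimed ~30% for N7 -> N5), and the 80W board power for the RX 560 is approximate:

    # Hypothetical node-porting estimate; the scaling factors are
    # assumptions based on foundry claims, not measurements.
    POWER_SCALE = {
        ("16nm", "7nm"): 0.60,  # assume ~40% power cut at iso-performance
        ("7nm", "5nm"): 0.70,   # TSMC's claimed ~30% cut for N7 -> N5
    }

    def port_power(watts, steps):
        """Scale a part's power draw through a chain of node shrinks."""
        for step in steps:
            watts *= POWER_SCALE[step]
        return watts

    rx560_tbp = 80  # W, approximate Radeon RX 560 board power
    print(port_power(rx560_tbp, [("16nm", "7nm"), ("7nm", "5nm")]))
    # -> ~34 W for RX 560-class performance on 5nm; compare that
    #    against measured M1 GPU figures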
Now look at the M1 Ultra GPU, also manufactured on TSMC 5nm. Its performance is equivalent to the AMD Radeon Pro Vega VII (TSMC 7nm). How do their maximum power consumptions compare? According to AnandTech's test [0] of the M1 Max GPU, under the most GPU-intensive task they could push (Aztec High Off), the M1 Max GPU reports about 46W. Let's assume for the moment that the M1 Ultra GPU would report 90W under the same test (because I can't find anyone who has run it). According to this fella [1], the maximum GPU power of the "Radeon Pro VII" is "way below 200W", so let's put it at 150W. Now, re-manufacturing the Radeon Pro Vega VII on TSMC 5nm brings roughly 30% power savings at the same performance level, trimming max GPU power down to 105W (!!). Comparable to the M1 Ultra GPU.
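Here's the same back-of-envelope for the Ultra comparison, using the numbers above. The 90W Ultra figure is my guess (double AnandTech's 46W M1 Max reading), the 150W Radeon Pro VII figure is my reading of [1], and 0.70 is TSMC's claimed N7 -> N5 power saving:

    # All inputs are estimates from the text, not measurements.
    m1_ultra_gpu_w = 2 * 46    # ~92 W: doubling AnandTech's M1 Max figure [0]
    radeon_pro_vii_w = 150     # "way below 200 W" per [1], pegged at 150 W
    n7_to_n5 = 0.70            # ~30% power saving at the same performance

    ported_w = radeon_pro_vii_w * n7_to_n5
    print(f"Radeon Pro VII on 5nm: ~{ported_w:.0f} W vs M1 Ultra ~{m1_ultra_gpu_w} W")
    # -> ~105 W vs ~92 W: roughly comparable, which is the point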
So Apple's GPUs don't seem to me any more power efficient than PC GPUs would be, if vendors chose to target Apple's performance level AND had access to Apple's manufacturing process node.
CPU and GPU microarchitectures are quite different species, and Apple doesn't seem to have much of an edge in its GPU microarchitecture. Unlike with CPU microarchitectures, vendors can also change their GPU microarchitectures and ISAs to suit their design goals.
[0] AnandTech's test on the M1 Max GPU
[1] One fella's test on a "Radeon Pro Vega VII"