Yeah, but maybe I should hold off for the new iMac, which may have a significantly more powerful GPU and CPU than anything else out there?
Very doubtful that any Apple GPU will outclass an RTX 3080. Maybe you could find one single, obscure benchmark where the high-end "M1X" High-End DeskTop (HEDT) chip, or whatever it ends up being named, beats an RTX 3080, but flat-out outperform it? No.
You're not going far with 128-bit wide shared memory, either. Even if the GPU were that powerful, it would sit idle waiting on data from RAM with only 68-70 GB/s of theoretical memory bandwidth; the RTX 3080 has about 760 GB/s. If you want high-end gaming, Windows-Intel is your only option.
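This is just back-of-envelope math: theoretical peak bandwidth is bus width times data rate. A quick sketch, using approximate public specs (nothing Apple has announced):

```python
# Theoretical peak bandwidth = bus width (in bytes) * data rate (transfers/s)
def bandwidth_gbs(bus_bits, mts):
    """Peak bandwidth in GB/s from bus width in bits and data rate in MT/s."""
    return (bus_bits / 8) * mts / 1000

# M1-class unified memory: 128-bit LPDDR4X-4266 (approximate public spec)
print(bandwidth_gbs(128, 4266))   # ~68.3 GB/s
# RTX 3080: 320-bit GDDR6X at 19 Gbps per pin (approximate public spec)
print(bandwidth_gbs(320, 19000))  # ~760 GB/s
```

That's an order-of-magnitude gap, which is the whole problem.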
If Apple opts to keep onboard video in their HEDT M-series, they have a few options: ring the package with RAM chips like a GPU does, or go with something exotic like on-package HBM2E. Other options include a discrete GPU over PCIe 4 or 5, or providing 4, 6, or 8 memory channels with RAM slots at 32 bits each to aggregate enough bandwidth for high-end GPU work. Going ultra-wide DDR4 would have sky-high latency compared to GDDR6 while still delivering relatively low bandwidth due to lower clocks, so that's not much of a solution.
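Running the same arithmetic over those options shows why wide DDR4 doesn't get there. The specific channel widths, clocks, and stack counts below are my own assumptions for illustration, not leaked specs:

```python
# Hypothetical configurations, using the bandwidth_gbs() helper above
configs = {
    "8ch x 32-bit DDR4-3200":               bandwidth_gbs(8 * 32, 3200),
    "GDDR6, 256-bit @ 14 Gbps":             bandwidth_gbs(256, 14000),
    "HBM2E, 2 stacks x 1024-bit @ 3.2 Gbps": bandwidth_gbs(2 * 1024, 3200),
}
for name, gbs in configs.items():
    print(f"{name}: {gbs:.0f} GB/s")
# 8ch x 32-bit DDR4-3200: 102 GB/s
# GDDR6, 256-bit @ 14 Gbps: 448 GB/s
# HBM2E, 2 stacks x 1024-bit @ 3.2 Gbps: 819 GB/s
```

Even the full 8-channel DDR4 setup lands around 100 GB/s, nowhere near a midrange GDDR6 card, while HBM2E actually gets into 3080 territory (at HBM2E prices).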
I'll be shocked if Apple has developed a discrete GPU on a card. If they're still making nice with AMD, maybe they've developed AMD drivers for Apple-ARM.
The next hurdle is power. A HEDT CPU plus a high-power GPU all on one substrate (two chips on one green slab) will be very power hungry; you can't ignore physics. Powering 20-50 billion transistors of CPU+GPU, even at 5nm, means a 250+ watt chip.
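A crude sanity check on that number, scaling linearly from the transistor counts and TDPs of shipping chips. This is a naive model that ignores clocks, voltage, and workload, so treat it as a gut check only (the figures are approximate public specs):

```python
# Naive watts-per-billion-transistors extrapolation (ignores clocks/voltage)
# Approximate public figures: M1 ~16B transistors at ~15 W package power;
# GA102 (RTX 3080) ~28B transistors at ~320 W board power.
m1_wpb = 15 / 16        # ~0.9 W per billion (efficiency-tuned, 5nm)
ga102_wpb = 320 / 28    # ~11.4 W per billion (performance-tuned, 8nm)

# A hypothetical 40B-transistor CPU+GPU lands anywhere from ~38 W at
# M1-like clocks to ~460 W at GPU-like clocks; push clocks to HEDT
# levels and 250+ W is entirely plausible.
print(40 * m1_wpb, 40 * ga102_wpb)
```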
After that, cooling a chip with that much heat produced in a small area won't be difficult. Doing it in a way that doesn't result in jet-turbine sound effects while still keeping a reasonable bill of materials is the challenge.
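The acoustic problem is really a power-density problem: watts divided by die area sets how hard the cooler has to work. A rough sketch, with the die sizes being pure guesses on my part:

```python
# Heat flux in W/cm^2 for a hypothetical 250 W chip at a few guessed die sizes
for die_mm2 in (300, 450, 600):
    print(f"{die_mm2} mm^2: {250 / (die_mm2 / 100):.0f} W/cm^2")
# ~83, ~56, ~42 W/cm^2 -- moving that much heat quietly out of an
# iMac-thin chassis is the hard part, not the raw cooling capacity.
```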
Short version: the high-end Apple chip has real potential to be a chip-level challenge, with many of the same pitfalls Apple hit with the trashcan Mac Pro, but at a microscopic level.