Yeah, but that didn't happen overnight.
No, but the technology is there today. The question is simply how far Apple wants to go with it. There is no reason in principle why they couldn't build a GPU with x cores; the only limitation is their business plan.
If quadrupling the number of 'cores' does indeed quadruple the performance (and the leaks are true), then that would give it... *drumroll please*, mid-tier desktop GPU performance.
The leaks sound realistic, and yes, GPUs scale almost linearly with the number of processing units. It's easy to see if you compare the performance and core counts of desktop GPUs (while normalizing for clocks, of course).
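A quick back-of-the-envelope sketch of that scaling argument (all numbers here are placeholders for illustration, not benchmark figures): once you normalize for clock speed, throughput tracks core count almost one-to-one, which is why quadrupling the cores is expected to roughly quadruple performance.

```python
# Idealized linear-scaling model: performance ~ cores * clock * constant.
# All figures below are hypothetical placeholders, not measured values.

def estimated_performance(cores: int, clock_ghz: float, perf_per_core_clock: float) -> float:
    """Estimate throughput assuming perfect linear scaling with core count and clock."""
    return cores * clock_ghz * perf_per_core_clock

# Hypothetical baseline: an 8-core GPU at 1.3 GHz delivering 2.6 "TFLOPS" (placeholder).
baseline_tflops = 2.6
k = baseline_tflops / (8 * 1.3)  # throughput per core per GHz for this family

# Quadruple the core count at the same clock:
print(estimated_performance(32, 1.3, k))  # -> 10.4, i.e. roughly 4x the baseline
```

Real GPUs fall a bit short of this ideal (memory bandwidth and power limits get in the way), but within one architecture the linear model is a decent first approximation.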
Which would be ASTONISHING, and damn near physics breaking like only Apple can do.
Just a direct consequence of using smart technology instead of relying on ever-increasing power consumption like some others do. See the next section. The process advantage helps too, though.
One source of confusion, and something that annoys me no end about modern GPU manufacturers, is that they use the exact same marketing names for products that aren't at all equivalent. For example, a mobile RTX 3060 has half the RAM and is about 50% slower than a desktop RTX 3060, but both are simply called an "RTX 3060". They're not the same GPU; they're not the same anything. They're two completely different products with the same name.
Because that's just how Nvidia does business these days. They overclock the hell out of the GPU and the VRAM, increase the power consumption by 50% over the last generation and declare "look, we now have a 50% faster GPU!". This power consumption inflation has to stop. It is absolutely ridiculous that 200 watts for a GPU is considered "mid-range" today; just a few years ago that was reserved for the biggest and hottest GPUs out there. What is Nvidia going to do next, release a 500 W GPU and call it the new "mid-range"? Intel is doing it too, btw: they just raised their mobile CPU TDP to 65 W (!!). A couple of years ago that was considered desktop territory. This is lazy, creates unreasonable expectations and is definitely not how innovation is supposed to work.
And that, btw, is one reason why I am so excited about Apple's new hardware. Because they are currently the only company that gets it. Because they haven't lost their sense of scale and are not joining the "performance at any cost and let the world burn" bandwagon.
There's no doubt in my mind the M1 will absolutely obliterate integrated GPUs (it already does), and continue to trade blows with the fastest dedicated laptop GPUs. But what I really want to see is at least mid-tier desktop GPU performance: the kind of performance that starts enabling real high-end workloads and decent gaming. I'm not sure if we'll ever see that from the M1, though.
As I wrote before, it solely depends on what kind of product Apple wants to offer. They currently own the most energy-efficient GPU IP out there, and if they wanted, they could probably sell dGPUs that outperform an RTX 3090 at half the power consumption. But I doubt that they are interested in doing this.
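To make the perf-per-watt arithmetic behind that claim concrete, here is a minimal sketch; the efficiency advantage and board power figures are assumptions for illustration, not measurements.

```python
# Hypothetical perf/watt comparison; none of these numbers are real measurements.
target_perf = 35.0          # arbitrary performance units to match (placeholder)
competitor_power_w = 350.0  # assumed board power of a big desktop card

competitor_perf_per_watt = target_perf / competitor_power_w
efficient_perf_per_watt = 2.0 * competitor_perf_per_watt  # assume a 2x efficiency advantage

power_needed = target_perf / efficient_perf_per_watt
print(power_needed)  # -> 175.0 W: same performance at half the power, given 2x perf/watt
```

The point is simply that a 2x perf/watt lead translates directly into matching performance at half the power, if you are willing to scale the chip up that far.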
They will likely target products that are "good enough" and ship them in form factors that would be impossible with the usual hardware. No, I don't expect Apple prosumer GPUs to be faster than RTX 3060 variants... but they would ship in thin and light laptops with incredible battery life and all the other bells and whistles.
I wouldn't be surprised if future Mx chips are still paired with GPUs from Nvidia on the very high end.
Why would Apple use less efficient GPU IP, and why would they sabotage the GPU programming ecosystem that they took so much care to build up? Apple's vision is true heterogeneous computing in a personal device, something that has so far been reserved for supercomputers. A third-party dGPU is a death sentence for that vision.