Since the 'unified memory' is fully addressable by the GPU cores, it effectively gives you gigantic VRAM for machine learning, 3D modelling, etc. That won't be paralleled by AMD/Nvidia dGPUs anytime soon.
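For anyone who hasn't worked with it: here's roughly what that looks like in Metal. This is a minimal sketch assuming a high-memory Apple Silicon machine; the 64 GiB figure and the variable names are just illustrative, not anyone's actual workload.

```swift
import Metal

// On Apple Silicon the CPU and GPU share one physical memory pool, so a
// buffer created with .storageModeShared is visible to both sides with
// no copy and no PCIe upload step.
let device = MTLCreateSystemDefaultDevice()!

// Ask for a working set larger than any consumer dGPU's VRAM, capped at
// what this device actually allows for a single buffer.
let length = min(64 << 30, device.maxBufferLength)  // 64 GiB, illustrative
guard let weights = device.makeBuffer(length: length,
                                      options: .storageModeShared) else {
    fatalError("allocation failed")
}

// The CPU writes directly into the same pages the GPU will read; there is
// no separate VRAM copy to create or keep in sync.
let ptr = weights.contents().bindMemory(to: Float.self,
                                        capacity: length / MemoryLayout<Float>.stride)
ptr[0] = 1.0  // immediately visible to GPU kernels, no explicit transfer
```

The point being: the whole "fit the model into VRAM" problem just disappears, up to whatever unified memory the machine shipped with.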
Why not? I would say it's FAR more likely that AMD and Nvidia can just quadruple or octuple their VRAM than it is for Apple to produce a GPU that is faster, more powerful, and price-competitive at GPU tasks than Nvidia's or AMD's.
Also, is putting that much data onto the GPU in the first place, before it can be worked on, really going to be more efficient than streaming it from system RAM via direct memory access, especially if you end up needing more than whatever arbitrary limit Apple has?
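To be concrete about the alternative: on a discrete GPU you stage data through system RAM and copy it across the bus in chunks, and that pattern scales past any fixed memory ceiling. A rough sketch in Metal terms below; the chunk size, the loop count, and the synchronous wait are simplifying assumptions, since real streaming code would double-buffer and overlap copies with compute.

```swift
import Metal

// Sketch of the discrete-GPU streaming pattern: data lives in system RAM
// and is blitted into GPU-private memory (VRAM on a dGPU) in chunks,
// rather than being resident in one shared pool.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

let chunk = 256 << 20                                          // 256 MiB window
let staging = device.makeBuffer(length: chunk,
                                options: .storageModeShared)!  // CPU-visible
let vram = device.makeBuffer(length: chunk,
                             options: .storageModePrivate)!    // GPU-only

// Each iteration: the CPU fills the staging buffer from disk or system
// RAM, then a blit command copies it across to GPU-private memory.
for _ in 0..<4 {
    // ... write the next chunk into staging.contents() here ...
    let cmd = queue.makeCommandBuffer()!
    let blit = cmd.makeBlitCommandEncoder()!
    blit.copy(from: staging, sourceOffset: 0,
              to: vram, destinationOffset: 0, size: chunk)
    blit.endEncoding()
    cmd.commit()
    cmd.waitUntilCompleted()  // naive; real code overlaps copy and compute
}
```

Yes, it's more bookkeeping than a shared buffer, but it's exactly how dGPUs have handled datasets bigger than VRAM for years, and it has no hard ceiling.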
We've seen this before: Apple has a cultural problem of choosing the wrong tech path, or rather choosing a tech path because it's the one they have to sell, not because it's the best solution for the task, and then wasting years pounding the square peg into the round hole. Take tablets, for example: does the modern iPad more closely resemble the original iPad (a big-screen iPhone) or the original Surface (a tablet computer with stylus, windowing, mouse, keyboard, external display support, etc.)?
Just imagine an Apple discrete GPU with two 'M2 Extreme' chips plus 256GB of VRAM. I bet anyone bragging about their Intel Mac Pro right now will feel small then.
I imagine it would suck as badly at graphics tasks and price/performance, compared to whatever Nvidia and AMD are offering at the time, as the current Apple Silicon offerings do. The only advantage it will have is that Apple can simply refuse to support new AMD cards, the same way they refused to support Nvidia cards after the 10-series, and then structure their cooked benchmarks however they like.