I’m confident they can make a great 50W GPU; it’s the 250W–350W range I’m less confident about. They’d need to hit over 30, probably closer to 40 TFLOPS of shading horsepower to be competitive with the high end. If they can’t hit that, they’d be better off sticking with AMD.
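For a rough sanity check, here’s the napkin math (a sketch in Swift, nothing Apple-specific): peak FP32 throughput is cores × ALUs per core × 2 FLOPs per FMA × clock. The 128 ALUs per core and ~1.28 GHz figures are the M1’s, and they’re pure assumptions for any larger part.

```swift
// Back-of-the-envelope FP32 peak: cores × ALUs/core × 2 FLOPs per FMA × clock.
// 128 ALUs per core and ~1.28 GHz are the M1's figures — assumptions for any bigger chip.
func estimatedTFLOPS(cores: Int, alusPerCore: Int = 128, clockGHz: Double = 1.28) -> Double {
    Double(cores * alusPerCore) * 2.0 * clockGHz / 1000.0
}

print(estimatedTFLOPS(cores: 8))    // ≈ 2.6 TFLOPS — the shipping 8-core M1 GPU
print(estimatedTFLOPS(cores: 128))  // ≈ 41.9 TFLOPS — roughly what a 40 TFLOPS target implies
```

So hitting 40 TFLOPS at M1-like clocks means something in the neighborhood of 120+ cores, unless they push clocks or widen the cores.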
To match something like an MI100, a 60-core Apple GPU would do. Then again, the question is whether they would even want or need to build that kind of GPU. Apple is not in the server business, and they are not in the supercomputer business. I think we need to look less at specs and more at what Apple products are supposed to do. For example, there is a lot of focus on machine learning with modern GPUs, with dedicated matrix multiplication units across different precision ranges, but Apple already has this with the Neural Engine (actually, this bit is a little odd, since Apple currently has matrix multiplication acceleration on the CPU, the GPU and the Neural Engine). And since they use unified memory, they can put all of those processors to work on the same data simultaneously, without the copies and transfer overhead a discrete GPU incurs.
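To make the unified memory point concrete, here’s a minimal Metal sketch: a buffer created with .storageModeShared lives in the same physical memory for the CPU and the GPU, so there’s no staging copy before a kernel can touch it (the actual compute kernel is omitted here).

```swift
import Metal

// A buffer in unified memory: .storageModeShared means the CPU and the GPU
// see the same physical pages, so there is no staging/blit copy step.
let device = MTLCreateSystemDefaultDevice()!
let count = 1024
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// CPU side: write straight into the buffer through a plain pointer.
let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { values[i] = Float(i) }

// GPU side: the very same buffer would be bound to a compute encoder with
// encoder.setBuffer(buffer, offset: 0, index: 0) — no upload, no copy.
// (The kernel itself is omitted; any Metal shader would see the data as-is.)
```

On a discrete GPU the same pattern needs a managed or private buffer plus an explicit synchronization/blit step, which is exactly the overhead unified memory removes.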
For example, will Apple ever introduce double-precision support on their GPUs? I don't think so. That's not an area their products are used in, so why would they waste die space on it? For heavy-duty scientific computation you probably want a supercomputer anyway, and for the occasional bit of double-precision work you can either use the CPU (still hoping for that SVE support) or emulate extended precision using FP32 on the GPU.
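For reference, the FP32 "extended precision" trick is usually double-single (a.k.a. float-float) arithmetic: you carry a value as an unevaluated sum of two Floats and use error-free transforms to hold on to the low-order bits. A minimal sketch of just the addition step, written in plain Swift here, though on the GPU the same few lines would sit in a Metal shader:

```swift
// Double-single ("float-float"): a value is stored as the unevaluated sum hi + lo
// of two Floats, giving roughly 48 significand bits instead of 24.
struct FloatFloat {
    var hi: Float
    var lo: Float
}

// Knuth's TwoSum: sum + err == a + b exactly, no magnitude precondition.
func twoSum(_ a: Float, _ b: Float) -> (sum: Float, err: Float) {
    let s = a + b
    let bb = s - a
    let err = (a - (s - bb)) + (b - bb)
    return (s, err)
}

// Dekker's Fast2Sum: assumes |a| >= |b|; used to renormalize the result.
func fastTwoSum(_ a: Float, _ b: Float) -> (sum: Float, err: Float) {
    let s = a + b
    return (s, b - (s - a))
}

// Add two double-single values (the simple/"sloppy" variant of ds_add).
func add(_ x: FloatFloat, _ y: FloatFloat) -> FloatFloat {
    let (s, e) = twoSum(x.hi, y.hi)
    let (hi, lo) = fastTwoSum(s, e + x.lo + y.lo)
    return FloatFloat(hi: hi, lo: lo)
}

// Example: accumulate increments a single Float would round away entirely
// (1e-9 is below the ulp of 1.0 in Float, so plain Float stays at 1.0).
var acc = FloatFloat(hi: 1.0, lo: 0.0)
for _ in 0..<1_000_000 { acc = add(acc, FloatFloat(hi: 1e-9, lo: 0.0)) }
print(Double(acc.hi) + Double(acc.lo))  // ≈ 1.001
```

It's not free (several FP32 ops per "double" op), but it covers the occasional case without Apple having to spend die area on FP64 units.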
But for more traditional Mac tasks like video rendering, a small Apple GPU is likely to outperform a large (and much faster on paper) Nvidia or AMD GPU. Unified memory and software integration trump pure performance in this domain.