....
Intel CPUs and AMD graphics are terrible, yes. Intel has tried for years to make a discrete GPU, which turned into their failed Xeon Phi Coprocessors. But that should be an indicator of how hard it actually is to produce improvements in technology. ....
GPUs aren't hugely harder to design than a top-end CPU. Intel's previous drive out into the swamp with Larrabee was more a matter of a "when all you have is a hammer, everything looks like a nail" mindset than some substantive difficulty with GPU design. It all took a 'left turn' into the weeds when someone decided to apply a high overlap of x86 instruction set design (and decoder work) to the GPU. That was one of the principal problems: trying to couple it to an instruction set dragged along from the '80s and '90s.
Intel had made buckets of money from the inertia of gobs of software saddled with 32-bit (and smaller) legacy myopia. The substantive problem kicks in when you try to apply that inertia to a problem area outside of general-purpose CPUs. It's one of those "we made gobs of money off this, so let's just pour it like ketchup on another area" moves. That is digging a deeper moat around what you've got, not necessarily solving the new problem. (But it may be the only way to get funding when the whole organization's thinking is deeply skewed and mired in groupthink.)
Intel's iGPUs generally had a smaller transistor budget. Initially they only got what was somewhat "left over" space. Then, after a while, the mainstream Core i-series got "stuck" around 4 cores because the iGPU got the biggest share of the budget, and iGPU performance got better.
Where Intel is now is different from where their iGPUs started. It has gotten to the point where there is real contention for floorplan space between the iGPU and the x86 cores. So getting to the point where they can just hand most (or effectively all) of the transistor budget to GPU processors and supporting logic isn't as big of a leap as it once was.
Breaking into the higher-end GPU business is about as much a matter of software as it is hardware. That is a big barrier to entry for most players, and another contributing reason why Intel has been widely floating the DG1 to developers (it won't be a general retail product at all).
"big dies" that do high end computation "compute" is a good option for Intel to keep their fabs busy with work if the CPU business retracts a bit.
Software being an important factor is one reason Apple is pushing hard on Metal (big, bright, orange "detour" signs around OpenGL, OpenCL, etc. pointing toward Metal). Their iGPU solutions will pragmatically work much better if well-designed/optimized Metal code is provided to their hardware.
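To make the Metal point a bit more concrete, here is a minimal sketch (my own, not from Apple's docs) of what handing a small compute job to an Apple iGPU through Metal looks like from Swift. The kernel name "scale_by_two" is hypothetical and would live in the app's default .metal library; the relevant part is that a .storageModeShared buffer sits in unified memory on Apple's iGPUs, so well-written Metal code avoids explicit upload/download copies entirely.

import Metal

// Minimal Metal compute dispatch sketch. Assumes a kernel function
// named "scale_by_two" is compiled into the app's default library.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue(),
      let library = device.makeDefaultLibrary(),
      let function = library.makeFunction(name: "scale_by_two"),
      let pipeline = try? device.makeComputePipelineState(function: function) else {
    fatalError("Metal setup failed")
}

// Shared storage: on Apple iGPUs the CPU and GPU see the same memory,
// so no explicit copies to or from the GPU are needed.
var values: [Float] = [1, 2, 3, 4]
let buffer = device.makeBuffer(bytes: &values,
                               length: values.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)

// One GPU thread per element, capped by the pipeline's threadgroup limit.
let width = min(pipeline.maxTotalThreadsPerThreadgroup, values.count)
encoder.dispatchThreads(MTLSize(width: values.count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: width, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()

// Results are readable straight out of the shared buffer.
let result = buffer.contents().bindMemory(to: Float.self, capacity: values.count)
print((0..<values.count).map { result[$0] })   // with a doubling kernel: [2.0, 4.0, 6.0, 8.0]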