That's not how any of this works. GPU cores don't directly interact with displays. They're just computation engines.
OTOH there may be no direct correspondence between GPU cores and the number of displays
but more displays mean more pixels to render to more frame buffers. Even running an extra 4k screen adds a lot of extra pixels, and many people run 4k screens in scaled mode (so everything is rendered internally at 5k and then downscaled by the GPU) - so if you want to run multiple high-res screens smoothly, and maybe render 3D or high-def video on them, you should probably be looking at a SoC with a larger GPU anyway.
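To put some rough numbers on that - a quick back-of-envelope sketch, assuming a 2560x1664 built-in panel (e.g. the 13" Air) and the common "looks like 1440p" scaled mode; the resolutions are just illustrative, not anything Apple publishes:

```python
# Rough back-of-envelope pixel counts - illustrative resolutions, not official figures.
builtin = 2560 * 1664        # assumed 13" MacBook Air built-in panel
ext_4k_native = 3840 * 2160  # 4K external at native resolution
ext_4k_scaled = 5120 * 2880  # 4K external in "looks like 1440p" mode (rendered at 5K, then downscaled)

for name, pixels in [("built-in panel", builtin),
                     ("4K native", ext_4k_native),
                     ("4K scaled (5K render)", ext_4k_scaled)]:
    print(f"{name}: {pixels / 1e6:.1f} MP ({pixels / builtin:.2f}x the built-in panel)")
```

By that rough maths, a single scaled 4K screen is around 3.5x the pixels of the built-in panel, and two of them plus the internal display is getting on for 8x - which is the kind of sustained load that argues for a bigger GPU.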
I'm not necessarily suggesting that Apple disabled existing extra display support to protect us from laggy displays, but - as several people pointed out - this decision was probably made when the M1/M2 was on the drawing board, and how the GPU would perform with that many displays could/should have been a factor in that decision.
If you just need lots of mainly-static text displays, there's always DisplayLink etc.
Maybe Apple could have placated many complainers by allowing it to support either two external monitors or the laptop screen + one external monitor. Disclaimer: I have zero understanding of the technicalities, so I don't even know if this would be possible, but two displays is two displays in my simplistic brain.
I don't think you need to get too bogged down in technicalities to guess that "more features" = "more transistors, more space, more power consumption, more heat". There's a saying that "a good designer knows when to reject good ideas" - the M1/M2 design process probably involved 101 decisions along the lines of "is this feature worth adding X more transistors and extra connections between modules?"
The underlying problem here is that the M1/M2 "overperforms" for its primary target market of passively cooled ultraportables and tablets, but doesn't
really have the connectivity or RAM capacity for some of the more demanding workflows that the raw processing grunt enables.