To put it simply: one of the primary advantages of Apple Silicon is that it offers a streamlined GPU programming model with predictable performance across the entire ecosystem. Two things are especially important here. First, it significantly cuts down development and testing time, since you don't have to deal with all the hardware differences and driver bugs you'd otherwise encounter: everything from the Apple TV to the Mac Pro runs the same hardware and the same driver, just in different "quantities". Second, Apple GPUs have unique features that no mainstream GPU can offer: unified memory¹, TBDR, memory shared across shader invocations, etc. If you as a developer take advantage of these features, you can build software that performs very well, with high efficiency and flexibility. For example, the high-performance unified memory of Apple Silicon has amazing potential for professional applications, since it allows the CPU, GPU, ML accelerators and any other specialized processors to work on the same data in tandem (see the sketch below), something that is impossible on a traditional PC where the GPU is connected to the rest of the system via a comparatively slow data bus.

Basically, Apple Silicon is a huge step for GPUs on Macs, not only in terms of performance but, what I personally consider much more important, in terms of stability, predictability and ease of development. I like to compare Apple Silicon Macs to consoles in this regard: you know exactly what hardware you are running on, and Apple gives you a fairly decent amount of control over what that hardware can do, something you don't really have when working with a Windows or Linux machine.
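To make the unified memory point concrete, here is a minimal Swift/Metal sketch (my own illustration, not from any particular codebase; the kernel name double_values and the buffer size are arbitrary). On Apple Silicon a single .storageModeShared allocation is visible to both the CPU and the GPU, so there is no staging buffer and no copy across a bus:

```swift
import Metal

// Tiny compute kernel, compiled from source at runtime for brevity.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void double_values(device float *data [[buffer(0)]],
                          uint id [[thread_position_in_grid]]) {
    data[id] *= 2.0f;
}
"""

let device = MTLCreateSystemDefaultDevice()!
let count = 1024

// One allocation, shared by CPU and GPU: this is the unified memory model.
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU writes directly into the buffer...
let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { values[i] = Float(i) }

// ...the GPU operates on the very same memory...
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "double_values")!)
let queue = device.makeCommandQueue()!
let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreads(MTLSize(width: count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()

// ...and the CPU reads the result back with no copy whatsoever.
print(values[10]) // 20.0
```

On a PC with a discrete GPU the same round trip would typically involve a device-local allocation plus explicit uploads and readbacks over PCIe; here it is literally one pointer.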
Now, the problem with eGPUs is that they completely break the Apple Silicon GPU model. Most significantly, they break unified memory. Then, if you want an eGPU, you are probably talking about an AMD or Nvidia one (because, let's be honest, why would Apple make an eGPU; it's a very niche market that makes no sense for them). These GPUs don't support the same things that Apple GPUs support, and they don't come with the same programming model, performance guarantees or streamlined feature set. Developers now need to check what type of GPU is connected, what features it supports and how best to program it, which increases development time, makes testing more complicated and significantly increases the chance of bugs (to be fair, this is how GPU programming currently works on Intel Macs, or in fact on any PC, but that's also exactly the aspect that Apple Silicon makes so much better); the sketch below shows what that device-inspection boilerplate looks like. All for the <1% of users that run an eGPU... and for what purpose? Content creation software is likely to run better on the internal GPU anyway (because of unified memory), and games... well, once Apple-optimized games appear, why would a dev implement a separate rendering path just for the handful of users who might own a dGPU, when they can get great performance by opting into the TBDR-specific optimizations that are guaranteed to work on any Mac?
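To illustrate the extra branching an eGPU forces on developers, here is a hedged Swift sketch (selectRenderPath and the two-path split are hypothetical, just to show the shape of the problem). On macOS you can enumerate every attached GPU, and you have to decide per device which feature set you can rely on:

```swift
import Metal

// Hypothetical render paths: one that leans on Apple's TBDR features,
// and a conservative one for immediate-mode GPUs like an AMD eGPU.
enum RenderPath { case tbdrOptimized, genericImmediateMode }

func selectRenderPath(for device: MTLDevice) -> RenderPath {
    // Apple-family GPUs guarantee TBDR features such as tile shaders,
    // memoryless render targets and programmable blending.
    if device.supportsFamily(.apple6) || device.supportsFamily(.apple7) {
        return .tbdrOptimized
    }
    return .genericImmediateMode
}

// MTLCopyAllDevices() lists every GPU in the system, eGPUs included.
for device in MTLCopyAllDevices() {
    let kind = device.isRemovable ? "eGPU"
             : device.isLowPower ? "integrated"
             : "discrete/built-in"
    print("\(device.name) [\(kind)], unified memory: \(device.hasUnifiedMemory),",
          "chosen path: \(selectRenderPath(for: device))")
}
```

On an Apple Silicon Mac without eGPU support this loop sees exactly one device and the answer is always the same; plug in an eGPU and suddenly every such decision point in the codebase needs a second, separately tested path.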
And of course, there is a simple practical issue as well: even if Apple opens up their Metal driver stack to third parties (which is very unlikely IMO), who is going to write the drivers? On Intel Macs, eGPUs are easy: they are just regular dGPUs connected via a regular PCIe bus, it's just that the bus itself hangs on a longer cable. Intel Macs already need GPU drivers to run their internal AMD GPUs, and the same driver will also work for an external GPU with minimal extra OS support. But writing a driver specifically for Apple Silicon? Apple won't do it, they have no incentive whatsoever, and I doubt that AMD could justify the expense; the eGPU market is just not that lucrative.