Perhaps, but an in-depth treatise on my work environment isn't really on topic for this thread, and
@Maximara's combative attitude hasn't given me the impression that he has any actual curiosity or interest beyond shouting down my experiences. I didn't see much point in spending the effort to explain my situation to someone who clearly doesn't care.
Of course I'm quite familiar with multi-arch Docker images; I make use of them routinely. But the reality is that you don't have to stray very far in a Docker workflow to run into performance and compatibility challenges, even with an "interpreted" language like Python. We've had Python 3 code that won't run cleanly across platforms because it depends on TensorFlow libraries (which are C-accelerated), as well as intractable performance issues that make it unreliable to have developers building and testing on one platform for eventual deployment on another. My organization builds, ships, and deploys Docker images containing Golang, Python, C++, and Haskell services, plus probably a few other languages I'm forgetting, all deployed to an x64 Kubernetes infrastructure.
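To be clear about what I mean by multi-arch, these are the standard buildx-style builds, roughly along these lines (the registry and image names below are just placeholders):

    # build and push a manifest list covering both architectures
    docker buildx create --use
    docker buildx build \
        --platform linux/amd64,linux/arm64 \
        -t registry.example.com/team/service:1.2.3 \
        --push .

    # a developer on an arm64 laptop can still force the production architecture,
    # but then the container runs under emulation, which is exactly where the
    # performance problems tend to show up
    docker run --platform linux/amd64 registry.example.com/team/service:1.2.3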
In production, our developers often need to pull the exact container images that are running in the cluster down to their local machines for DTrace and debugging runs, to nail down peculiar, difficult-to-diagnose bugs.
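In practice that means resolving the exact image digest from the running pod and pulling that specific image, something like this (the pod and image names here are made up for illustration):

    # find the digest of the image the pod is actually running
    kubectl get pod service-7d4b9cf6d-abcde \
        -o jsonpath='{.status.containerStatuses[0].imageID}'

    # pull that exact image locally for a debugging session
    docker pull registry.example.com/team/service@sha256:<digest from the step above>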
Even if 95% of things can be made to work predictably and reliably in a multi-architecture environment, the impact of that last 5% is enough to push an organization to consolidate on a single architecture. It's cheaper and carries much less risk.
Suggestions that we just rewrite everything in Swift are not really tethered to reality.