> I was mostly thinking about consumer cameras with very close-together lenses where you're (I assume) going to get a compressed sense of depth for mid distances.

It's the opposite of what you're saying. You want the lenses farther apart when shooting things farther away. The only reason I'd want the lenses closer together than average human eyes would be to do close-up photography of smaller items.
I guess my error was starting from the premise that if you want the reproduced image to look the same as what a human would perceive, then anything other than roughly human pupillary distance would distort depth perception. I wasn't considering whether that's actually what someone shooting 3D video wants: not all video is meant to replicate a person just standing there looking at things at "normal human vision" distances, or with lenses that exactly replicate human-eye "zoom" level.
I'm assuming that, optically speaking, if you shoot something far away with lenses spaced farther apart, you'll record a stronger sense of depth than a human viewing the same scene in person would actually get. I guess it makes sense that you might want that--an enhanced version of reality (or, from a more imaginative perspective, seeing the scene as if you were a giant).
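The "giant" intuition can be put into rough numbers. A minimal sketch, assuming the common stereographer's rule that the perceived scale of a scene is roughly inversely proportional to the ratio of lens separation to average human interpupillary distance (the ~65 mm figure and the function names are my assumptions, not from this discussion):

```python
# Rough sketch of the hyperstereo "giant effect" (and its opposite).
# Assumption: a baseline N times wider than human eyes makes the
# scene read as roughly 1/N scale, as if the viewer were an N-times
# giant; a narrower baseline does the reverse (hypostereo).

HUMAN_IPD_MM = 65.0  # approximate average adult interpupillary distance


def apparent_scale(baseline_mm: float) -> float:
    """Rough factor by which the scene appears miniaturized or enlarged."""
    return HUMAN_IPD_MM / baseline_mm


print(apparent_scale(650.0))  # 10x-wide baseline: scene reads ~0.1 scale
print(apparent_scale(6.5))    # 10x-narrow baseline: scene reads ~10x scale
```

This is only a first-order heuristic; perceived scale also depends on focal length, screen size, and viewing distance.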
I do get how it's a necessity for very close objects--humans can't even focus properly on very small, very close objects, and certainly can't fuse them in a useful binocular-vision way, so having the lenses closer together lets you show something as if the viewer were shrunk down to a smaller size.
I assume there are also interactions with magnification I'm not fully understanding or taking into account; in the same way depth of field changes hugely on a 2D camera depending on the focal length of the lens, failing to adjust the distance between lenses accordingly at different focal lengths in 3D could produce depth perception that's wrong, weird, or at least different from what you intend.
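That interaction can be sketched with the small-angle approximation for stereo disparity: on-screen disparity is roughly focal length times baseline divided by subject distance, so doubling the focal length without halving the lens separation doubles the depth cue. Also shown is the stereographers' "1/30 rule" of thumb (baseline of about one-thirtieth the distance to the nearest subject); the function names and numbers are illustrative assumptions, not from this discussion:

```python
# Hedged sketch of why baseline and focal length interact in 3D.
# Small-angle approximation: disparity d ~ f * B / Z, where
#   f = focal length, B = lens separation, Z = subject distance.


def disparity_mm(focal_mm: float, baseline_mm: float, distance_mm: float) -> float:
    """Approximate image-plane disparity for a subject at distance_mm."""
    return focal_mm * baseline_mm / distance_mm


def rule_of_thumb_baseline_mm(nearest_mm: float) -> float:
    """Common '1/30 rule' heuristic for a natural-looking stereo base."""
    return nearest_mm / 30.0


# Same subject at 10 m: doubling focal length doubles the disparity
# unless the baseline is reduced to compensate.
print(disparity_mm(35.0, 65.0, 10_000.0))
print(disparity_mm(70.0, 65.0, 10_000.0))
print(rule_of_thumb_baseline_mm(10_000.0))
```

So a long lens at human-eye separation exaggerates depth relative to what you'd see in person, which matches the "wrong, weird, or at least different" intuition above.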