Making every object on the screen a "live object" would be problematic for people using multiple displays. How would you know which window your keystrokes are going to when you have several windows open at once? Without visual cues to show which window is active and which is not, users would be more prone to typing in windows they didn't mean to, or closing or quitting apps by accident. Window focus is also a core element of GUI programming; input handling can't be done without it. I suspect it also plays a major part in accessibility, for users with screen readers and so on.

Regardless, a user knows which window is active: it's the window they are interacting with in the moment. Visual cues that began in the 80s will gradually disappear, and every object on the screen will become a live object.
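To make the keystroke-routing point in the first comment concrete, here is a minimal sketch of focus-based input dispatch. The `KeyEvent`, `Window`, and `WindowManager` types are hypothetical illustrations, not AppKit or any real windowing API; an actual system does this inside the window server.

```swift
// Hypothetical types for illustration only; not a real windowing API.

struct KeyEvent {
    let character: Character
}

final class Window {
    let title: String
    private(set) var contents = ""

    init(title: String) { self.title = title }

    func handle(_ event: KeyEvent) {
        contents.append(event.character)
    }
}

final class WindowManager {
    private var windows: [Window] = []

    // Exactly one window holds keyboard focus at a time. Without this,
    // there is no answer to "where do my keystrokes go?"
    private(set) var focusedWindow: Window?

    func open(_ window: Window) {
        windows.append(window)
        focusedWindow = window // newly opened windows take focus, as on most desktops
    }

    func focus(_ window: Window) {
        guard windows.contains(where: { $0 === window }) else { return }
        focusedWindow = window
    }

    // Key events are not broadcast to every "live" window; they are
    // delivered only to the focused one, so a keystroke is never ambiguous.
    func dispatch(_ event: KeyEvent) {
        focusedWindow?.handle(event)
    }
}

let wm = WindowManager()
let notes = Window(title: "Notes")
let mail = Window(title: "Mail")
wm.open(notes)
wm.open(mail)                          // Mail now has focus
wm.dispatch(KeyEvent(character: "h"))  // goes to Mail
wm.focus(notes)
wm.dispatch(KeyEvent(character: "i"))  // goes to Notes
print(notes.contents) // "i"
print(mail.contents)  // "h"
```

Whatever the visual treatment, some window must hold focus; the debate in this thread is only about how visibly that state is communicated.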
There is still a visible difference between the front and background windows.

Thanks for this. I was hoping so...
It's a lot less prominent though, unfortunately. I find myself losing the focused window pretty often.
This horrible design even removes simple things like being able to tell the difference between active and inactive windows.
Because users have brains. They clicked on the document when they wanted to type in it. People are aware of what they are doing, you know. I can't logically think of a scenario where someone hasn't selected the document window and placed their text entry point before typing in it 😂
Documents and app windows are already like this on iOS and iPadOS, and nobody forgets where they are typing.
This is absolute nonsense. Something tells me you don't work in an office.