There's a very, very good reason voice recognition has mostly failed to take off, and never will: it might be quick to say "computer: open 2001 totals spreadsheet," but there's no way in hell I'm going to sit in front of my computer all day talking to it. I'd drive my wife insane during the day, any wall-less office would be a cacophony of commands, and I'd keep her up all night mumbling to my computer. I don't think so. Heck, there are times I've used a pen and tablet as a pointing device just because clicking the mouse made too much noise.
And as TBR pointed out, I don't want some fancy floating 3D OS, even if it were physically possible, because I certainly don't want to sit here waving my arms around all day--programmers would all have huge, beefy arms, and being tired after a day at the office would take on a whole new meaning. Tylenol sales would skyrocket, though, and you'd have special "programmer bicep toning" courses at the gym.
Besides, if all you want is to point and drag on a physical screen, you can easily do that now with a touchscreen, if you're willing to spend the money.
A combination of eye focus and mental control (look at icon, think "click and hold", drag it with your eyes to a new spot, think "let go") might work, though I still wonder about long-term eyestrain.
If anything, the next advance for the OS would be layering: making use of two and a half dimensions to organize windows and data on the screen. This is already starting to pop up a bit in things like Apple's Exposé and MS's Longhorn demos.
As for actual rumors, sorry, nothing but speculation right now.