...irrelevant since the interface is not finalized for its release and will be refined for practical uses before Microsoft releases it.
What evidence is there that the interface isn't finalized?
Certainly it would add stress on the arms, but painters and illustrators are used to that, and it can give the body good exercise; the tilt also exercises different muscles depending on its angle. It's about time computing became better for our physical health.
Reaching far is healthier than lazily moving your fingertips slightly to type.
Traditional artists aren't physically painting/drawing 8+ hours a day, 5-6 days a week as part of their career. Graphic designers and computer artists use graphic tablets, which lay flat on their desks, to reduce the stress on their elbows and wrists. For most, computing is about being productive as part of a career. I think even Richard Simmons would prefer being a couch potato to sitting in front of his computer doing multi-touch arm exercises for 8+ hours a day.
This is like the step from the command line to the GUI. The GUI was built on top of the command line as an extra layer, but even though it was extra, it was better: it gave users an easier experience and more options. Multi-touch surface computers will do the same thing. The jump from the GUI to multi-touch will be just as great as the jump from the command line to the GUI!
I like your optimism, but the fact remains that right now this technology adds more cost than it adds performance/usability, especially at a large scale. Despite your arguments, I still haven't been persuaded that a large-scale implementation overlaid on the monitor won't create ergonomic problems.
We haven't even touched much on the fact that using our big, clumsy hands to manipulate things gets in the way of our eyes being able to see the screen (especially in the areas we are manipulating). Advanced voice command and eye-tracking cursor technologies probably show more potential than on-screen multi-touch for this reason alone.
No!--Having properly designed windows with icons large enough for multi-touch does not make scaling useless--and even if it did, multi-touch would still be better because of its many other superior uses.
A GUI that's properly sized for multi-touch with my fingers is larger than it needs to be when I use my other input devices, and therefore wastes screen real estate. Now, if it can proximity-detect my fingers and automatically resize the interface (kinda like dock magnification), then that's great.
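To make the resizing idea concrete, here's a rough sketch of the kind of proximity-based magnification I mean; all of the names, sizes, and the finger-sensing input are made up for illustration, not any real API:

```typescript
// Hypothetical sketch: scale a UI control based on how close a detected
// finger is hovering, similar to dock magnification. The sensor data and
// constants are assumptions, not a real multi-touch SDK.

interface Point { x: number; y: number; }

const BASE_SIZE = 24;   // comfortable size for mouse/stylus, in pixels
const TOUCH_SIZE = 64;  // comfortable size for a fingertip, in pixels
const RANGE = 150;      // distance (px) at which magnification begins

function sizeForProximity(control: Point, finger: Point | null): number {
  // No finger hovering: keep the compact, mouse-friendly size.
  if (!finger) return BASE_SIZE;

  const distance = Math.hypot(finger.x - control.x, finger.y - control.y);

  // Outside the magnification range: stay compact.
  if (distance >= RANGE) return BASE_SIZE;

  // Linearly blend toward the finger-friendly size as the finger approaches.
  const t = 1 - distance / RANGE;
  return BASE_SIZE + t * (TOUCH_SIZE - BASE_SIZE);
}

// Example: a finger hovering about 30px away grows the control most of the
// way toward its touch-friendly size.
console.log(sizeForProximity({ x: 100, y: 100 }, { x: 120, y: 122 }));
```

That way the interface stays compact for the mouse and only grows when a finger actually approaches it.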
Some tasks are easier with the mouse, while others are easier with the keyboard, but most will be easier with the multi-touch screen. Try precisely selecting an area of the screen with the keyboard--I guarantee you that it is easier to use a mouse--it is even easier to use a mouse to select an area of the screen than it is to select it with your finger on a multi-touch screen. Therefore the mouse will still be used for the specific things it is best for. A physical keyboard is easier to use than an onscreen one since the key positions can be felt--therefore regular keyboards will still be used too.
You do a great job of defending the status quo of the mouse and keyboard here, yet say most things will be easier with a multi-touch screen. Evidence? In what ways would it be more practical or cost-effective than, say, advanced voice command technologies?
Boy, you really like to twist reality, because there is no contradiction in the quotes of mine that you responded to.
I guess I'd better just post the whole portion of the argument here and let people decide for themselves:
Now that said...I think there is room for multi-touch in the interface, but I believe we will see trackpads with multi-touch capability and maybe even a mouse with a multi-touch surface before we see direct object manipulation via multi-touch using large surface areas such as the monitor.
From which you excerpted this portion to create your Straw Man:
Now that said...I think there is room for multi-touch in the interface, but I believe we will see trackpads with multi-touch capability
I have a multi-touch trackpad on my iBook G4--Apple is ahead of you on this one or else you are behind in technology.
Wherefore, I called you on the Straw Man:
Two-finger scroll doesn't cover the full range of "gestures" that multi-touch can perform does it? I'd love Apple to offer multitouch surfaces...I just don't see them as a monitor replacement (for example a 10-key or keyboard sized surface that could be used as a mixing board, a color palette, etc, etc. would work...I just don't see it replacing the cursor metaphor on the monitor).
Then you ignored my call on your Straw Man and argued only against this portion of my previous statement:
I just don't see them as a monitor replacement (for example a 10-key or keyboard sized surface that could be used as a mixing board, a color palette, etc, etc. would work...I just don't see it replacing the cursor metaphor on the monitor).
It's time to move on--let's not be old-fashioned; it's time to use a flashlight instead of a candle. Imagine things better--don't imagine things in the context of the way they are today--that's too limiting and holds us down when we're ready to fly.
That question proves that you are a complete Luddite when it comes to new technology. It is obvious that this specific new technology is the next generation of standard computers. Questioning "...how exactly is a multi-touch Surface wannabe or a multi-touch monitor better on a large scale?" is the first thing a Luddite would say about a new technology like this one, so I don't know why you try to seem so confident when you are completely wrong.
Then prove it's better as a large-scale device rather than ducking the question and lacing your rhetoric with personal attacks. I've already stated I'm a proponent of small-scale multi-touch interfaces...you've stated that you believe the mouse and keyboard aren't going anywhere and are better for precision work on the screen. Now let's make the leap to the point where you show evidence to convince me that large multi-touch screens are better than small-scale multi-touch interfaces (which I believe are better for both ergonomic and cost reasons) for everyday use.
The multi-touch trackpad already exists and has its limitations. A larger surface gives users many additional benefits.
Not really...especially if you are using an advanced multi-touch "gesture" interface in parallel with another device as the cursor control. Imagine using your mouse to select an object, then using the multi-touch surface to rotate it while pressing the Shift key to constrain the value of rotation. If the multi-touch surface is built into the keyboard, it can be much more productive than a large-scale device. And because such a device will be customizable, I don't rule out the ability to have it serve as a mirror of the monitor for things such as photo sorting and the like.
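To illustrate that workflow, here's a loose sketch of how a two-finger twist on a separate surface could rotate a mouse-selected object, with Shift snapping the rotation to fixed steps; the event shapes and names are invented for illustration, not from any actual toolkit:

```typescript
// Sketch only: the mouse has already selected the object; this just computes
// its new angle from a two-finger twist gesture on the multi-touch surface.

interface Contact { x: number; y: number; }   // one finger's position

const SNAP_DEGREES = 15; // Shift constrains rotation to this increment

// Angle of the line between two finger contacts, in degrees.
function twistAngle(a: Contact, b: Contact): number {
  return (Math.atan2(b.y - a.y, b.x - a.x) * 180) / Math.PI;
}

function rotateSelection(
  initialAngle: number,
  start: [Contact, Contact],     // finger positions when the gesture began
  current: [Contact, Contact],   // finger positions now
  shiftHeld: boolean
): number {
  let delta = twistAngle(current[0], current[1]) - twistAngle(start[0], start[1]);

  // Holding Shift snaps the rotation to the nearest 15-degree step.
  if (shiftHeld) {
    delta = Math.round(delta / SNAP_DEGREES) * SNAP_DEGREES;
  }

  return initialAngle + delta;
}

// Example: a roughly 40-degree twist with Shift held snaps to 45 degrees.
console.log(
  rotateSelection(0, [{ x: 0, y: 0 }, { x: 100, y: 0 }],
                     [{ x: 0, y: 0 }, { x: 77, y: 64 }], true)
);
```

The point is that the gesture surface, the mouse, and the modifier key each do what they're best at, in parallel.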
Complaining about the ergonomics of a table is unquestionably out of place: people use tables for paperwork and other things, their bodies function just fine, and they will continue to once surface computers become the standard.
I just don't agree...a computer screen has a tendency to draw a user in and hold them "captive" much more than a piece of paper does. Blink rates decrease when viewing a computer monitor...if I bring my monitor within arm's reach, eye strain will increase or I'll have to take more breaks, and I've only got a 20" monitor...if I get a dual 30" multi-touch setup, I might as well just pluck out my eyes and throw them into a skillet.
If it is available for customers at the end of 2007 then it will probably be for sale at certain retail stores.
That seems like some wishful thinking more than anything else.
Some people buy desktop computers and televisions for $5,000 to $10,000--I would be less likely to buy one of those at that price than a surface computer at the same price. I might buy a Microsoft surface computer if Apple doesn't release one of their own soon.
Sure...people spend that much on desktop computers because of the power they provide. Choosing a Surface over a Power Mac when you don't even know the processing power yet is sorta like picking a candy bar by the color of its wrapper rather than the ingredients.
Wow--it sounds like you would purposefully choose the wrong option. That's like dragging a folder full of important documents to the trash and pressing Empty Trash.
I didn't indicate that the "grab" function was destructive to the data on the device, did I? Nor did I state that "All from All Available" was the only option; obviously you'd be able to "grab" from individual devices in proximity. I think the close proximity/contact needed by the Surface is just too limiting. People want access to their devices from where they are...having to haul them to their computers just takes away from being able to do what they want with their data. You should be able to keep your camera, iPhone and/or iPod near your keys and your wallet downstairs and still communicate with them from the computer in your office upstairs. Having to set something down on a Surface to communicate with it is just as ridiculous as having to connect it using a USB cable. Ultimately, we'll arrive at a point where, the minute you pull into your garage coming from a photo shoot, your camera will detect that it's home and will have automatically downloaded all its photos to your computer by the time you've gotten in the door. Devices will be programmable with proximity-relative workflows that control their behavior when near other devices...a natural evolution of iSync.
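For what it's worth, here's a rough sketch of what I mean by a proximity-relative workflow; the rule shape and device names are purely hypothetical, not any real iSync or Surface feature:

```typescript
// Loose sketch: a rule that fires when a device detects it is near a known
// location or peer, with no Surface contact or USB cable involved.

type ProximityEvent = { device: string; nearbyPeer: string };

type WorkflowRule = {
  device: string;
  whenNear: string;                        // peer or location that triggers it
  action: (e: ProximityEvent) => void;
};

const rules: WorkflowRule[] = [
  {
    device: "camera",
    whenNear: "home-network",
    // e.g. start pushing photos to the office computer as soon as you pull
    // into the garage, so they're transferred by the time you're inside.
    action: (e) => console.log(`${e.device}: downloading photos to home computer`),
  },
];

// Called by the (hypothetical) device whenever it senses a nearby peer.
function onProximityDetected(event: ProximityEvent): void {
  for (const rule of rules) {
    if (rule.device === event.device && rule.whenNear === event.nearbyPeer) {
      rule.action(event);
    }
  }
}

// Example: the camera comes within range of the home network.
onProximityDetected({ device: "camera", nearbyPeer: "home-network" });
```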
Using physical objects that are detachable from the screen is a great way to use a surface computer. If the objects were part of the screen and the surface computer, then users would have fewer options, since the screen wouldn't recognize external objects.
But it's limiting at the same time...take your chess example: right now you can conveniently view the board at many angles. Once you add in physical chess pieces, that functionality is lost (unless the surface somehow moves the physical pieces). Granted, the physical pieces will take away some of the desire to manipulate the board angle. In public places such as bars and restaurants, where the Surface will be debuted...you add the problem of theft of the physical objects.
The other problem is how smart the Surface actually is...in one of the demos with the painting app, the guy used an actual paintbrush to paint on the Surface...did the stroke appear as a brush stroke? Nope...just a slightly wider finger stroke, with no characteristics of a physical brush whatsoever.
That is one type of use--but a large screen is the best use yet and needs to see the most development with this technology...I totally agree with you on everything in this quote of yours. I'm glad we agree on something.
I think we actually aren't that far apart on the large issues...it's just the details. I think small-scale input devices just make more sense because of their lower cost and the way they will more easily fit into people's existing computing experience. I think the ergonomic issues haven't been resolved with a large-scale device, so I don't think they'll work on a personal/business computing level...but in public areas I think they are the way to go. In the end, I think we both want this technology not only to be wisely developed, but to become well adopted by the public.
Swapping hardware for those purposes would be a waste of time and energy and would also wear out the computer parts pretty fast.
Not if we are talking about something such as a thin silicone overlay like an iSkin keyboard cover, just to let you better feel where the keys are, that can be easily removed when you're not typing. The jog-shuttle would have to be more complex than that...but someone needing such a device isn't likely to swap it out as often.
Swapping out tools on the fly would be time-consuming and more difficult. Using the whole screen would give you access to more tools and options all at once, since you can place whatever tools work for you onto the screen. The "more useful" option is a larger screen that provides you with more possible (and more practical) uses.
Only if you are talking about swapping out physical items...not if each multi-touch configuration is just the touch of a multi-touch button away. Plus, since you could have toolsets on the multi-touch device, you could hide palettes on the monitor and gain screen real estate that way (because you are basically gaining a 5"x17" multi-touch surface for your toolsets). This becomes even more efficient when you add in the ability to have different multi-touch toolset screens load with each application.
Add in cursor independence and the ability to add multiple multi-touch input devices at only a portion of the cost of a monitor, and you are rapidly increasing your potential productivity. Select with your mouse/stylus, tap a color from your palette, tap a font definition on another palette.
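To sketch the per-application toolset idea from the last couple of posts (the application names, controls, and "driver" hook are just placeholders, not a real API):

```typescript
// Sketch: when the foreground application changes, the keyboard-sized
// multi-touch surface loads a different set of on-surface controls.

type Control = { id: string; label: string };

const toolsets: Record<string, Control[]> = {
  "Photoshop": [
    { id: "brush-size", label: "Brush size" },
    { id: "color-palette", label: "Color palette" },
  ],
  "Final Cut": [
    { id: "jog-shuttle", label: "Jog/shuttle" },
    { id: "timeline-zoom", label: "Timeline zoom" },
  ],
};

const defaultToolset: Control[] = [{ id: "app-switcher", label: "App switcher" }];

// Called by the (hypothetical) surface driver whenever the active app changes.
function loadToolsetFor(appName: string): Control[] {
  return toolsets[appName] ?? defaultToolset;
}

// Example: switching to Photoshop brings up its palette controls.
console.log(loadToolsetFor("Photoshop").map(c => c.label));
```

That's the sense in which the small surface can pull palettes off the monitor: the toolset follows the application, not the other way around.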