I am building a working model of the FTIR multi-touch interface. The screen itself is extremely simple to make, but I don't imagine the controller software is.
It would have to:
1) Detect blobs via an IR camera.
2) Decide that, from one frame to the next, it's the same blob (e.g. if one were to drag a finger across the screen) - see the rough sketch below.
3) Work out each blob's position in relation to the screen.
4) Use this data to make an accurate human interface device, such as controlling a mouse pointer, or better yet, recognizing gestures such as tapping (like a trackpad), dragging, pinching, expanding, etc. That would be excellent for controlling Aperture, Final Cut, or even Safari.
So, I have no programming knowledge yet (I'm waiting for my Objective-C book), but I'm curious about the difficulty of such a program, algorithms and all...
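To give a feel for step 2, here is a minimal sketch of frame-to-frame blob matching, assuming the camera/blob-detection stage already hands you a list of (x, y) centroids per frame. It just matches each new blob to the nearest blob from the previous frame, and treats anything too far away as a new touch. The names and the 40-pixel threshold are my own assumptions for illustration, not anything from a real tracker.

```c
/* Hypothetical sketch: nearest-neighbour blob tracking between two frames.
 * Assumes blob centroids have already been extracted from the IR image. */
#include <stdio.h>
#include <math.h>

#define MAX_JUMP 40.0  /* assumed max pixels a finger moves between frames */

typedef struct { double x, y; int id; } Blob;

/* Assign IDs to the blobs in `cur` by matching them against `prev`.
 * Returns the next unused ID. */
static int track(const Blob *prev, int nprev, Blob *cur, int ncur, int next_id)
{
    for (int i = 0; i < ncur; i++) {
        int best = -1;
        double best_d = MAX_JUMP;
        for (int j = 0; j < nprev; j++) {
            double dx = cur[i].x - prev[j].x;
            double dy = cur[i].y - prev[j].y;
            double d  = sqrt(dx * dx + dy * dy);
            if (d < best_d) { best_d = d; best = j; }
        }
        /* Close enough to an old blob: same finger.  Otherwise: new touch. */
        cur[i].id = (best >= 0) ? prev[best].id : next_id++;
    }
    return next_id;
}

int main(void)
{
    /* Two fake frames: one finger dragging right, a second finger appearing. */
    Blob f1[] = { { 100, 100, 0 } };
    Blob f2[] = { { 112, 101, 0 }, { 300, 250, 0 } };
    int next_id = track(f1, 1, f2, 2, 1);
    for (int i = 0; i < 2; i++)
        printf("blob %d at (%.0f, %.0f)\n", f2[i].id, f2[i].x, f2[i].y);
    return next_id ? 0 : 0;
}
```

A real tracker would be fancier (handling fingers that disappear for a frame, blobs that merge, etc.), but this is roughly the core idea.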