So I was wondering whether it's even remotely possible to track where the user is looking on-screen based on the video from their built-in iSight. I was thinking you could have a calibration mode where the user looks at flashing dots at the corners, the midpoints of the sides, the center, and various other locations, and the frames snapped at each point would then serve as the benchmarks.
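Roughly what I have in mind for the calibration pass, as a rough sketch (Python with OpenCV here just for convenient frame grabbing; the point list, frame size, and console prompt are placeholders, not a real UI):

```python
import cv2
import numpy as np

# Screen locations (in points) the user is asked to fixate on during
# calibration: corners, midpoints of the sides, and the center.
# These coordinates assume a 1440x900 screen -- swap in your own.
CALIBRATION_POINTS = [
    (0, 0), (720, 0), (1440, 0),
    (0, 450), (720, 450), (1440, 450),
    (0, 900), (720, 900), (1440, 900),
]

def frame_to_vector(frame, size=(32, 24)):
    """Downsample a camera frame and flatten it into a color vector."""
    small = cv2.resize(frame, size)
    return small.astype(np.float32).ravel()

def calibrate(camera_index=0):
    """Capture one benchmark color vector per calibration point.

    A real app would flash each dot on screen and wait for fixation;
    here we just prompt on the console to keep the sketch short.
    """
    cap = cv2.VideoCapture(camera_index)
    benchmarks = {}
    for point in CALIBRATION_POINTS:
        input(f"Look at {point} and press Enter...")
        ok, frame = cap.read()
        if ok:
            benchmarks[point] = frame_to_vector(frame)
    cap.release()
    return benchmarks
```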
During the tracking phase, the image from the camera would be continuously compared against those benchmarks (using color vectors maybe? any other suggestions?) to determine which benchmark image it most closely matches. Obviously this strategy isn't going to be nearly as accurate as the more sophisticated methods currently in use, but would it be so frustratingly inaccurate that it's not even worth the time to think about?
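The tracking pass would then just be a nearest-neighbor lookup against those benchmark vectors, something like the sketch below (it reuses `frame_to_vector()` and `calibrate()` from the calibration sketch above; plain Euclidean distance on the downsampled color vectors is only one possible comparison, a histogram match would be another):

```python
import cv2
import numpy as np

def estimate_gaze(frame, benchmarks, size=(32, 24)):
    """Return the calibration point whose benchmark vector is closest to
    the current frame, using Euclidean distance on the color vectors."""
    vec = cv2.resize(frame, size).astype(np.float32).ravel()
    best_point, best_dist = None, float("inf")
    for point, ref in benchmarks.items():
        dist = np.linalg.norm(vec - ref)
        if dist < best_dist:
            best_point, best_dist = point, dist
    return best_point

if __name__ == "__main__":
    benchmarks = calibrate()            # from the calibration sketch above
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Print the screen point the user is (supposedly) looking at.
        print(estimate_gaze(frame, benchmarks))
    cap.release()
```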
I realize "accuracy" can mean a lot of different things, but I was thinking of a 200-300pt radius circle of tracking area with around 60-75% accuracy.