This week we are digging a little deeper into usability testing with eye tracking. Eye tracking is revolutionizing how we understand the way people interact with just about everything. A big player in the space is Tobii, which offers eye tracking setups for web applications, mobile devices, and even glasses for all the scenarios where you can't have someone sitting stationary at a screen. While it is amazing that we can see exactly what a person is looking at within an interface, it's also raising some very interesting questions about how that eye tracking data should be interpreted.
Unfortunately, interpreting eye tracking results isn't as cut-and-dried as "they looked at it, good" or "they didn't look at it, bad". Context plays a huge role. As the reading pointed out, looking at something for too long might be bad if it was confusing the user or distracting them from the task at hand. Conversely, not looking at something might mean they already understand it, so they don't need to waste any extra time on it, which is good. Basically, what it comes down to is that eye tracking is really cool, but you have to take the results with a grain of salt, because like statistics, the numbers can be misleading if you don't understand the full context.
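To make that ambiguity concrete, here's a minimal sketch of the kind of dwell-time summary eye tracking software typically reports (the area-of-interest names and durations here are made up for illustration, not from any real Tobii export). The numbers alone can't tell you whether a long dwell means engagement or confusion:

```python
from collections import defaultdict

# Hypothetical fixation log: (area_of_interest, fixation_duration_ms).
# All names and values are invented for this example.
fixations = [
    ("nav_menu", 180), ("search_box", 950), ("nav_menu", 220),
    ("hero_image", 300), ("search_box", 700),
]

# Total dwell time per area of interest.
dwell = defaultdict(int)
for aoi, ms in fixations:
    dwell[aoi] += ms

# The search box accumulating the most dwell time is ambiguous on
# its own: it could signal interest, or it could signal that the
# user was stuck there. Only the task context disambiguates.
for aoi, ms in sorted(dwell.items(), key=lambda kv: -kv[1]):
    print(f"{aoi}: {ms} ms")
```

Two participants can produce identical dwell numbers for completely different reasons, which is exactly why the raw metrics need that grain of salt.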