Using computer vision techniques, we extended eye tracking technology to allow for data normalization across dynamic environments. We applied these techniques to subjects viewing artwork at Duke’s Nasher Museum of Art.
Most eye tracking solutions track gaze with respect to a static object like a computer screen, making it easy to know exactly where a person is looking with respect to an image on the screen. This makes analysis easier, but doesn't accurately reflect real life. What happens when we move eye tracking into a more realistic, dynamic setting, using eye tracking glasses that allow people to move around? People can interact with objects in a much more natural manner, but a new challenge is introduced: we only have gaze data with respect to the glasses' frame of reference. In order to apply conventional analysis methods to these data, we need to map dynamic gaze back onto a static reference image, compensating for distance, head movement, and perspective.
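To give a sense of what that mapping looks like, here is a minimal sketch in Python with OpenCV. It assumes a planar homography H from the scene-camera frame to the static reference image has already been estimated (how we estimate it is described next); the function name and parameters are illustrative, not part of our released code.

```python
import numpy as np
import cv2

def map_gaze_to_reference(gaze_xy, H):
    """Map a gaze point from scene-camera pixels onto the static reference image.

    gaze_xy: (x, y) gaze coordinates in the glasses' scene-camera frame.
    H:       3x3 planar homography from scene-camera pixels to reference-image
             pixels (e.g., a frontal photo of the artwork).
    """
    pt = np.array([[gaze_xy]], dtype=np.float32)   # shape (1, 1, 2), as perspectiveTransform expects
    mapped = cv2.perspectiveTransform(pt, H)       # apply the homography to the gaze point
    return tuple(mapped[0, 0])                     # (x, y) on the reference image
```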
Using the OpenCV package and its efficient implementations of common computer vision algorithms, we developed a method that finds objects of interest in video from the eye tracking glasses and returns gaze coordinates relative to those objects. This lets experimenters apply conventional data analysis methods to eye tracking behavior recorded in dynamic, real-world situations.
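The sketch below illustrates one way such a per-frame pipeline can be built with OpenCV: detect local features in a reference image of the artwork and in each scene-camera frame, match them, estimate the frame-to-reference homography with RANSAC, and push the gaze point through that transform. The choice of ORB features, the match threshold, and the RANSAC reprojection error are assumptions for illustration, not necessarily the exact settings used in our method.

```python
import numpy as np
import cv2

def gaze_on_reference(ref_img, frame, gaze_xy, min_matches=15):
    """Locate the artwork in one scene-camera frame and map gaze onto the reference image.

    ref_img:  grayscale reference image of the artwork.
    frame:    grayscale frame from the eye tracking glasses' scene camera.
    gaze_xy:  (x, y) gaze coordinates in that frame.
    Returns gaze coordinates on ref_img, or None if the artwork isn't found.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    kp_ref, des_ref = orb.detectAndCompute(ref_img, None)
    kp_frm, des_frm = orb.detectAndCompute(frame, None)
    if des_ref is None or des_frm is None:
        return None

    # Match frame descriptors against the reference image.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_frm, des_ref)
    if len(matches) < min_matches:
        return None

    # Estimate a frame -> reference homography, rejecting outlier matches with RANSAC.
    src = np.float32([kp_frm[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None

    # Map the gaze point through the same transform onto the reference image.
    pt = np.float32([[gaze_xy]])
    return tuple(cv2.perspectiveTransform(pt, H)[0, 0])
```

Running this over every frame of the scene video yields a stream of gaze coordinates in the fixed coordinate system of the reference image, which can then be analyzed with standard static-image tools (heatmaps, areas of interest, fixation statistics).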