
EU Looks at Gaze-Contingent Displays

The EU began the Deepview project in 2012. It proposes using gaze-tracking technology to extend the user’s perception – for example, directing focus and enhancing colour where the user is looking. The first application to come out of the project, Gazer, was announced in March.

Gazer was developed by SACHI, the computer-human interaction group at the University of St Andrews in Scotland. It is software that works with eye-tracking devices (currently only the Tobii EyeX), enabling photographers using light-field cameras* to explore images simply by looking at them. Deepview coordinator Miguel Nacenta calls this “gaze-based perceptual augmentation”.

A Gazer image taken with the Lytro Illum, shown with different focal points.

In the past, light-field images have been refocused by pointing at them with a cursor. A gaze-contingent display (GCD), however, refocuses the image automatically to create a sensation of depth. It works by using the information the eye tracker gathers about the user’s gaze: not just its location, but also metrics such as blinks, fixations and saccades.
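As a rough illustration of the idea (not Gazer’s actual code), the sketch below picks the slice of a precomputed focal stack whose depth best matches the scene depth under the current gaze point. The focal stack, depth map and smoothing window are all assumptions made for the example.

```python
# Minimal sketch of gaze-contingent refocusing (illustrative only).
# Assumes a precomputed focal stack (one image per focal depth) and a
# per-pixel depth map; both are hypothetical inputs for this example.
import numpy as np

def refocus_for_gaze(focal_stack, stack_depths, depth_map, gaze_x, gaze_y):
    """Return the focal-stack slice whose depth best matches the depth
    under the user's current gaze point."""
    h, w = depth_map.shape
    # Clamp the gaze coordinates to the image bounds.
    x = int(np.clip(gaze_x, 0, w - 1))
    y = int(np.clip(gaze_y, 0, h - 1))
    # Average depth in a small window around the gaze point, which
    # smooths out eye-tracker jitter between fixations.
    window = depth_map[max(0, y - 8):y + 8, max(0, x - 8):x + 8]
    target_depth = float(window.mean())
    # Choose the slice focused closest to that depth.
    idx = int(np.argmin(np.abs(np.asarray(stack_depths) - target_depth)))
    return focal_stack[idx]
```

In a real system this lookup would run on every gaze update, typically gated on fixations so the focus does not jump during saccades.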

Such systems have been used before, but mainly for performance gains: by selectively omitting detail in the parts of the display the user is not looking at. We have previously written in depth about such foveated rendering systems, which are of great use in virtual reality. Deepview’s goal, however, is to find perceptual modifications that enhance the displayed information and create a more immersive experience.
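A minimal sketch of that kind of foveation, purely for illustration: full detail is kept near the gaze point and the frame is blended toward a heavily downsampled version with increasing eccentricity. The radii and downsampling factor are arbitrary assumptions, not values from any system mentioned above.

```python
# Illustrative foveation sketch (not a production renderer): keep full
# detail near the gaze point and blend toward a low-resolution version
# of the frame further out. Radii are assumed values for the example.
import numpy as np

def foveate(frame, gaze_x, gaze_y, inner_radius=100.0, outer_radius=400.0):
    h, w = frame.shape[:2]
    # Crude "low detail" version: 8x downsample, then nearest-neighbour upsample.
    low = frame[::8, ::8].repeat(8, axis=0).repeat(8, axis=1)[:h, :w]
    # Per-pixel distance from the gaze point.
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xs - gaze_x) ** 2 + (ys - gaze_y) ** 2)
    # 0 inside the fovea, 1 in the far periphery, linear ramp in between.
    t = np.clip((dist - inner_radius) / (outer_radius - inner_radius), 0.0, 1.0)
    if frame.ndim == 3:
        t = t[..., None]
    return (1.0 - t) * frame + t * low
```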

The Deepview project will end this May. Gazer is available now as open-source software on GitHub.

*A light-field camera, like the Lytro Illum, records the direction of incoming light rays as well as their intensity, in effect capturing the scene from many slightly different vantage points. The main advantage is that the resulting image can be refocused after it is shot.
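For the curious, that post-capture refocusing can be illustrated with a textbook shift-and-add sketch: assuming the light field has already been decoded into a grid of sub-aperture views, each view is shifted in proportion to its angular offset and the results are averaged, so rays from the chosen depth line up and appear sharp while everything else blurs. This is a simplified approximation, not Lytro’s or Gazer’s implementation.

```python
# Shift-and-add refocusing sketch. "views" is assumed to be a 2D grid of
# sub-aperture images (views[u][v] is the image seen from lenslet offset
# (u, v)); "alpha" selects the synthetic focal plane.
import numpy as np

def shift_and_add_refocus(views, alpha):
    n_u = len(views)
    n_v = len(views[0])
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    acc = np.zeros_like(views[0][0], dtype=np.float64)
    for u in range(n_u):
        for v in range(n_v):
            # Shift each view in proportion to its offset from the centre.
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            acc += np.roll(views[u][v], shift=(dy, dx), axis=(0, 1))
    # Average the aligned views to form the refocused image.
    return acc / (n_u * n_v)
```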