Wearable Gaze Trackers: Mapping Visual Attention in 3D
Related papers
Using a Visual Attention Model to Improve Gaze Tracking Systems in Interactive 3D Applications
2010
This paper introduces the use of a visual attention model to improve the accuracy of gaze tracking systems. Visual attention models simulate the selective attention part of the human visual system. For instance, in a bottom-up approach, a saliency map is defined for the image and assigns an attention weight to every pixel as a function of its color, edges, or intensity. Our algorithm uses an uncertainty window, defined by the gaze tracker's accuracy and located around the gaze point given by the tracker. Then, using a visual attention model, it searches for the most salient points, or objects, located inside this uncertainty window, and determines a new, and ideally more accurate, gaze point. This combination of a gaze tracker with a visual attention model is the main contribution of the paper. We demonstrate the promising results of our method in two experiments conducted in different contexts: (1) free exploration of a visually rich 3D virtual environment without a specific task, and (2) a gaze-controlled video game involving a selection task. Our approach can be used to improve real-time gaze tracking systems in many interactive 3D applications such as video games or virtual reality. The method can be paired with any gaze tracker, and the visual attention model can be adapted to the application in which it is used.
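As a rough sketch of the windowed saliency search this abstract describes, the snippet below snaps a raw gaze point to the most salient pixel inside the uncertainty window. All names are hypothetical, the saliency map is assumed to be precomputed, and this is not the authors' code.

```python
# Sketch of saliency-corrected gaze estimation (illustrative only).
import numpy as np

def correct_gaze(saliency, gaze_xy, radius):
    """Snap a raw gaze point to the most salient pixel inside the
    uncertainty window defined by the tracker's accuracy."""
    h, w = saliency.shape
    x, y = gaze_xy
    # Clip the uncertainty window to the image bounds.
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    window = saliency[y0:y1, x0:x1]
    # Index of the most salient pixel within the window.
    dy, dx = np.unravel_index(np.argmax(window), window.shape)
    return (x0 + dx, y0 + dy)

# Toy example: a 100x100 saliency map with one salient point.
sal = np.zeros((100, 100))
sal[42, 57] = 1.0
print(correct_gaze(sal, gaze_xy=(50, 50), radius=10))  # -> (57, 42)
```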
CHI '13 Extended Abstracts on Human Factors in Computing Systems (CHI EA '13), 2013
Understanding and estimating human attention in different interactive scenarios is an important part of human-computer interaction. With the advent of wearable eye-tracking glasses and devices such as Google Glass, monitoring of human visual attention will soon become ubiquitous. The presented work describes the precise estimation of human gaze fixations with respect to the environment, without the need for artificial landmarks in the field of view, and with the capability of mapping attention onto 3D information. It enables full 3D recovery of the human view frustum and the gaze pointer in a previously acquired 3D model of the environment in real time. The key contribution is that our methodology enables mapping of fixations directly into an automatically computed 3D model. This methodology will open new opportunities for studies of human attention during interaction with the environment, bringing new potential to automated processing for human factors technologies.
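To make the "gaze pointer in a previously acquired 3D model" idea concrete, here is a minimal, self-contained sketch that casts a world-space gaze ray against a triangle mesh using Moeller-Trumbore intersection. The head pose is assumed to come from localization against the model; none of this is the authors' pipeline.

```python
# Sketch of mapping a gaze ray into a triangle mesh (illustrative only).
import numpy as np

def ray_triangle(origin, direction, tri, eps=1e-9):
    """Moeller-Trumbore intersection; returns hit distance t or None."""
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:          # ray parallel to triangle
        return None
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv
    if not 0.0 <= u <= 1.0:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None

def map_fixation(origin, direction, mesh):
    """Return the nearest 3D intersection of the gaze ray with the mesh."""
    hits = [t for tri in mesh
            if (t := ray_triangle(origin, direction, tri)) is not None]
    return origin + min(hits) * direction if hits else None

# Toy mesh: one triangle one metre in front of the eye.
tri = [np.array([-1.0, -1.0, 1.0]), np.array([1.0, -1.0, 1.0]),
       np.array([0.0, 1.0, 1.0])]
print(map_fixation(np.zeros(3), np.array([0.0, 0.0, 1.0]), [tri]))  # [0. 0. 1.]
```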
Mobile three dimensional gaze tracking
Studies in Health Technology and Informatics, 2011
Mobile eye tracking is a recent method enabling research on attention during real-life behavior. With the EyeSeeCam, we have recently presented a mobile eye-tracking device whose camera-motion device (gazecam) records movies oriented in the user's direction of gaze. Here we show that the EyeSeeCam can extract a reliable vergence signal to measure the fixation distance. We extend the system to determine not only the direction of gaze at short distances more precisely, but also the fixation point in three dimensions (3D). Such information is vital if gaze tracking is to be combined with tasks requiring 3D information in peri-personal space, such as grasping. Our method therefore substantially extends the application range of mobile gaze-tracking devices and takes a decisive step towards their routine application in standardized clinical settings.
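The vergence idea above amounts to triangulating the two eyes' gaze rays. A minimal sketch, assuming known eye positions and gaze directions (names are illustrative, not the EyeSeeCam API): the fixation estimate is the midpoint of the rays' closest approach.

```python
# Sketch of a 3D fixation point from binocular vergence (illustrative).
import numpy as np

def fixation_from_vergence(p_l, d_l, p_r, d_r):
    """p_*: eye positions; d_*: gaze directions. Returns the midpoint
    of the shortest segment between the two gaze rays."""
    d_l, d_r = d_l / np.linalg.norm(d_l), d_r / np.linalg.norm(d_r)
    w = p_l - p_r
    a, b, c = d_l.dot(d_l), d_l.dot(d_r), d_r.dot(d_r)
    d, e = d_l.dot(w), d_r.dot(w)
    denom = a * c - b * b
    if abs(denom) < 1e-9:               # parallel rays: gaze at infinity
        return None
    s = (b * e - c * d) / denom         # parameter along left ray
    t = (a * e - b * d) / denom         # parameter along right ray
    return (p_l + s * d_l + p_r + t * d_r) / 2.0

# Eyes 6.4 cm apart, both verging on a point 0.4 m straight ahead.
p_l, p_r = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
target = np.array([0.0, 0.0, 0.4])
print(fixation_from_vergence(p_l, target - p_l, p_r, target - p_r))
```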
An Integrated System for 3D Gaze Recovery and Semantic Analysis of Human Attention
This work describes a computer vision system that enables pervasive mapping and monitoring of human attention. The key contribution is that our methodology enables full 3D recovery of the gaze pointer, the human view frustum, and associated human-centered measurements directly into an automatically computed 3D model in real time. We apply RGB-D SLAM and descriptor-matching methodologies for 3D modeling, localization, and fully automated annotation of ROIs (regions of interest) within the acquired 3D model. This methodology will open new avenues for attention studies in real-world environments, bringing new potential to automated processing for human factors technologies.
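As an illustration of what automated ROI annotation enables downstream, the sketch below accumulates per-ROI dwell time from a timed 3D gaze trace, with ROIs approximated as axis-aligned boxes in the model. The names and the box representation are assumptions, not the system's actual data structures.

```python
# Sketch of per-ROI dwell-time accumulation (illustrative only).
import numpy as np
from collections import defaultdict

def roi_of(point, rois):
    """rois: {name: (min_corner, max_corner)}; returns the hit ROI name."""
    for name, (lo, hi) in rois.items():
        if np.all(point >= lo) and np.all(point <= hi):
            return name
    return None

def dwell_times(gaze_points, timestamps, rois):
    """Accumulate per-ROI dwell time from a timed 3D gaze trace."""
    dwell = defaultdict(float)
    for (p, t0), t1 in zip(zip(gaze_points[:-1], timestamps[:-1]),
                           timestamps[1:]):
        name = roi_of(p, rois)
        if name is not None:
            dwell[name] += t1 - t0
    return dict(dwell)

rois = {"poster": (np.array([0, 1, 2.0]), np.array([1, 2, 2.1]))}
pts = [np.array([0.5, 1.5, 2.05])] * 3
print(dwell_times(pts, [0.0, 0.1, 0.2], rois))  # -> {'poster': ~0.2}
```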
3D Attentional Maps: Aggregated Gaze Visualizations In Three-Dimensional Virtual Environments
Proceedings of the International …, 2010
Gaze visualizations hold the potential to facilitate usability studies of interactive systems. However, visual gaze analysis in three-dimensional virtual environments still lacks methods and techniques for aggregating attentional representations. We propose three novel gaze visualizations for such environments: projected, object-based, and surface-based attentional maps. These techniques provide an overview of how visual attention is distributed across a scene, among different models, and across a model's surface. Two user studies conducted with eye tracking and visualization experts confirmed the high value of these techniques for the fast evaluation of eye-tracking studies in virtual environments.
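A surface-based attentional map of the kind proposed here can be approximated by splatting fixation durations onto mesh vertices with a Gaussian falloff. The sketch below is illustrative only; parameter values and names are assumptions, not the paper's method.

```python
# Sketch of a surface-based attentional map (illustrative only).
import numpy as np

def surface_attention(vertices, fixations, durations, sigma=0.05):
    """vertices: (V,3) mesh vertices; fixations: (F,3) gaze hit points;
    durations: (F,) fixation durations. Returns per-vertex weights."""
    weights = np.zeros(len(vertices))
    for hit, dur in zip(fixations, durations):
        d2 = np.sum((vertices - hit) ** 2, axis=1)
        weights += dur * np.exp(-d2 / (2.0 * sigma ** 2))
    return weights / weights.max()      # normalize for color mapping

verts = np.array([[0.0, 0, 0], [0.1, 0, 0], [1.0, 0, 0]])
fix = np.array([[0.0, 0, 0]])
print(surface_attention(verts, fix, np.array([0.3])))  # peak at vertex 0
```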
Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA '12), 2012
We implemented a system, called the VICON-EyeTracking Visualizer, that combines mobile eye-tracking data with motion-capture data to calculate and visualize the 3D gaze vector within the motion-capture coordinate system. To ensure that both devices were temporally synchronized, we used software we had previously developed. Placing reflective markers on objects in the scene makes their positions known, and spatially synchronizing the eye tracker with the motion-capture system then allows us to automatically compute how many times and where fixations occur, overcoming the time-consuming and error-prone disadvantages of the traditional manual annotation process. We evaluated our approach by comparing its output for a simple looking task and a more complex grasping task against the average results produced by manual annotation. Preliminary data reveal that the program differed from the average manual annotation results by only approximately 3 percent in the looking task with regard to the number of fixations and the cumulative fixation duration on each point in the scene. In the more complex grasping task, the results depend on object size: for larger objects there was good agreement (differences of less than 16 percent, or 950 ms), but this degraded for smaller objects, where there are more saccades towards object boundaries. The advantages of our approach are easy user calibration, unrestricted body movements (due to the mobile eye-tracking system), and compatibility with any wearable eye tracker and marker-based motion-tracking system. Extending existing approaches, our system is also able to monitor fixations on moving objects. The automatic analysis of gaze and movement data in complex 3D scenes can be applied to a variety of research domains, e.g., human-computer interaction, virtual reality, or grasping and gesture research.
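The core computation, expressing a head-frame gaze vector in motion-capture world coordinates and assigning fixations to marker-tracked objects, might look roughly like the following. The interfaces and the angular-threshold assignment are assumptions for illustration, not the VICON-EyeTracking Visualizer's implementation.

```python
# Sketch of gaze-to-mocap transfer and object assignment (illustrative).
import numpy as np

def gaze_to_world(R_head, t_head, gaze_dir_head):
    """R_head (3x3), t_head (3,): head pose from mocap markers.
    Returns (origin, direction) of the gaze ray in world coordinates."""
    d = R_head @ gaze_dir_head
    return t_head, d / np.linalg.norm(d)

def fixated_object(origin, direction, objects, max_angle_deg=3.0):
    """objects: {name: 3D marker position}. Pick the object whose
    direction from the eye deviates least from the gaze ray."""
    best, best_angle = None, np.radians(max_angle_deg)
    for name, pos in objects.items():
        v = pos - origin
        angle = np.arccos(np.clip(direction.dot(v / np.linalg.norm(v)),
                                  -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

R, t = np.eye(3), np.zeros(3)
origin, d = gaze_to_world(R, t, np.array([0.0, 0.0, 1.0]))
print(fixated_object(origin, d, {"cup": np.array([0.01, 0.0, 1.0])}))  # cup
```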
Visual attention using 2D & 3D displays
2015
In the past three decades, roboticists and computer vision scientists, inspired by psychological and neurophysiological studies, have developed many computational models of attention (CMAs) that mimic the behaviour of the human visual system in order to predict where humans will focus their attention. Most CMA research has focused on the visual perception of images and videos displayed on 2D screens. Recently, however, there has been a surge in devices that can display media in 3D, and CMAs in this domain are becoming increasingly important, yet research in this context remains minimal. This thesis attempts to alleviate this problem. We explore the Graph-Based Visual Saliency algorithm [68] and extend it into 3D by developing a new depth-incorporation method. We also propose a new online eye-tracker calibration procedure that is more accurate and faster than standard procedures and is also able to give confidence values associated with each eye-position reading. Eye tracking ...
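One simple way to fold depth into a 2D saliency map is to blend it with a nearness prior; the sketch below is an assumption for illustration, not the depth-incorporation method developed in the thesis.

```python
# Toy sketch of depth-weighted saliency (illustrative only).
import numpy as np

def depth_weighted_saliency(sal2d, depth, alpha=0.5):
    """sal2d: 2D saliency map in [0, 1]; depth: per-pixel depth in metres.
    alpha blends the flat 2D map with a depth-modulated version."""
    # Nearness prior: 1 at the closest pixel, 0 at the farthest.
    near = 1.0 - (depth - depth.min()) / (depth.max() - depth.min() + 1e-9)
    combined = (1.0 - alpha) * sal2d + alpha * sal2d * near
    return combined / (combined.max() + 1e-9)   # renormalize to [0, 1]

sal = np.random.rand(4, 4)                       # stand-in 2D saliency map
depth = np.linspace(0.5, 3.0, 16).reshape(4, 4)  # stand-in depth map
print(depth_weighted_saliency(sal, depth))
```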
Acquiring a Human’s Attention by Analyzing the Gaze History
2016
Our objective is to realize wearable interfaces for expanding our information activities in daily life. To implement these interfaces, we employ vision-based systems, including a variety of cameras, a head-mounted display, and other visual devices. In this paper, we present two wearable vision interfaces: 1) 2D gaze-region extraction from an observed image, and 2) 3D gaze-position estimation. In both of these interfaces, an eye-mark recorder is employed to obtain the history of the user's gaze directions. This gaze information is useful for analyzing the user's attention. Experimental results have demonstrated the effectiveness of the proposed interfaces.
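Turning a gaze history into attention events typically involves fixation detection. The sketch below shows a standard dispersion-threshold (I-DT) detector as one plausible building block; thresholds and names are illustrative assumptions, not the paper's interface.

```python
# Sketch of dispersion-threshold (I-DT) fixation detection (illustrative).
import numpy as np

def idt_fixations(points, times, max_disp=0.05, min_dur=0.1):
    """points: (N,2) gaze positions; times: (N,). Returns a list of
    (centroid, start_time, end_time) fixations."""
    fixations, i, n = [], 0, len(points)
    while i < n:
        j = i
        # Grow the window while its spatial dispersion stays small.
        while j + 1 < n and np.ptp(points[i:j + 2], axis=0).sum() <= max_disp:
            j += 1
        if times[j] - times[i] >= min_dur:
            fixations.append((points[i:j + 1].mean(axis=0),
                              times[i], times[j]))
        i = j + 1
    return fixations

t = np.arange(10) * 0.04                      # 25 Hz samples
pts = np.vstack([np.full((6, 2), 0.5), np.full((4, 2), 0.9)])
print(idt_fixations(pts, t))  # fixations around (0.5, 0.5) and (0.9, 0.9)
```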
Shop-i: Gaze based Interaction in the Physical World for In-Store Social Shopping Experience
Gaze-based interaction has several benefits: naturalness, remote controllability, and easy accessibility. However, it has mostly been used for screen-based interaction with static information. In this paper, we propose a concept of gaze-based interaction that augments the physical world with social information, and we demonstrate this interaction in a shopping scenario. In-store shopping is a setting where social information can augment the physical environment to better support a user's purchase decision. Based on the user's point of gaze, we project the following information onto the product and its surrounding surface: collective in-store gaze and purchase data, product comparison information, animations depicting product ingredients, and online social comments. This paper presents the design of the system, the results and discussion of an informal user study, and future work.