Overt visual attention on rendered 3D objects

Using a Visual Attention Model to Improve Gaze Tracking Systems in Interactive 3D Applications

2010

This paper introduces the use of a visual attention model to improve the accuracy of gaze tracking systems. Visual attention models simulate the selective attention part of the human visual system. For instance, in a bottom-up approach, a saliency map is defined for the image and gives an attention weight to every pixel as a function of its color, edge, or intensity features. Our algorithm uses an uncertainty window, defined by the gaze tracker's accuracy and located around the gaze point given by the tracker. Then, using a visual attention model, it searches for the most salient points, or objects, located inside this uncertainty window and determines a new, and hopefully better, gaze point. This combination of a gaze tracker with a visual attention model is considered the main contribution of the paper. We demonstrate the promising results of our method by presenting two experiments conducted in two different contexts: (1) free exploration of a visually rich 3D virtual environment without a specific task, and (2) a video game based on gaze tracking involving a selection task. Our approach can be used to improve real-time gaze tracking systems in many interactive 3D applications such as video games or virtual reality applications. The method can be adapted to any gaze tracker, and the visual attention model can also be adapted to the application in which it is used.
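The core refinement step described in this abstract can be illustrated with a minimal sketch, assuming the tracker reports a screen-space gaze point and a per-pixel saliency map is already available; the function and parameter names below are hypothetical, not taken from the paper.

```python
import numpy as np

def refine_gaze_point(saliency_map, raw_gaze_xy, radius_px):
    """Pick the most salient pixel inside the uncertainty window
    centred on the raw tracker gaze point.

    saliency_map : 2D array (H, W) of per-pixel attention weights
    raw_gaze_xy  : (x, y) gaze point reported by the tracker
    radius_px    : uncertainty radius derived from tracker accuracy
    """
    h, w = saliency_map.shape
    x = int(round(raw_gaze_xy[0]))
    y = int(round(raw_gaze_xy[1]))
    x0, x1 = max(0, x - radius_px), min(w, x + radius_px + 1)
    y0, y1 = max(0, y - radius_px), min(h, y + radius_px + 1)
    window = saliency_map[y0:y1, x0:x1]
    dy, dx = np.unravel_index(np.argmax(window), window.shape)
    return (x0 + dx, y0 + dy)   # refined gaze point
```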

Visual attention using 2D & 3D displays

2015

In the past three decades, roboticists and computer vision scientists, inspired by psychological and neurophysiological studies, have developed many computational models of attention (CMAs) that mimic the behaviour of the human visual system in order to predict where humans will focus their attention. Most CMA research has focused on the visual perception of images and videos displayed on 2D screens. Recently, however, there has been a surge in devices that can display media in 3D, and CMAs in this domain are becoming increasingly important, yet research in this context remains minimal. This thesis attempts to alleviate this problem. We explore the Graph-Based Visual Saliency algorithm [68] and extend it into 3D by developing a new depth incorporation method. We also propose a new online eye tracker calibration procedure that is more accurate and faster than standard processes and is also able to give confidence values associated with each eye position reading. Eye tracking ...

Mesh saliency and human eye fixations

ACM Transactions on Applied Perception, 2010

Mesh saliency has been proposed as a computational model of perceptual importance for meshes, and it has been used in graphics for abstraction, simplification, segmentation, illumination, rendering, and illustration. Even though this technique is inspired by models of low-level human vision, it has not yet been validated with respect to human performance. Here, we present a user study that compares the previous mesh saliency approaches with human eye movements. To quantify the correlation between mesh saliency and fixation locations for 3D rendered images, we introduce the normalized chance-adjusted saliency by improving the previous chance-adjusted saliency measure. Our results show that the current computational model of mesh saliency can model human eye movements significantly better than a purely random model or a curvature-based model.
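The chance-adjusted comparison this abstract refers to can be sketched roughly as follows; the z-score normalisation and random baseline shown here are assumptions that illustrate the general idea of comparing saliency at fixated versus randomly sampled locations, not the paper's exact definition of normalized chance-adjusted saliency.

```python
import numpy as np

def chance_adjusted_saliency(saliency, fixation_idx, n_random=10000, seed=0):
    """Illustrative chance adjustment: how much more salient are fixated
    locations than randomly sampled ones?

    saliency     : one value per mesh vertex (or per pixel of the rendering)
    fixation_idx : indices hit by recorded fixations
    """
    rng = np.random.default_rng(seed)
    s = (saliency - saliency.mean()) / saliency.std()     # normalise values (assumption)
    random_idx = rng.integers(0, len(s), size=n_random)   # chance baseline
    return s[fixation_idx].mean() - s[random_idx].mean()
```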

3D Attentional Maps: Aggregated Gaze Visualizations In Three-Dimensional Virtual Environments

Proceedings of the International …, 2010

Gaze visualizations hold the potential to facilitate usability studies of interactive systems. However, visual gaze analysis in three-dimensional virtual environments still lacks methods and techniques for aggregating attentional representations. We propose three novel gaze visualizations for application in such environments: projected, object-based, and surface-based attentional maps. These techniques provide an overview of how visual attention is distributed across a scene, among different models, and across a model's surface. Two user studies conducted among eye tracking and visualization experts confirm the high value of these techniques for the fast evaluation of eye tracking studies in virtual environments.
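Of the three visualizations, the object-based attentional map is perhaps the simplest to sketch: accumulate fixation time per 3D object hit by the gaze ray. The `pick_object` helper below is a hypothetical stand-in for whatever scene-picking mechanism the environment provides; this is an illustrative reading of the idea, not the authors' implementation.

```python
from collections import defaultdict

def object_attentional_map(fixations, pick_object):
    """Aggregate gaze into an object-based attentional map.

    fixations   : iterable of (screen_xy, duration_s) pairs
    pick_object : callable mapping a screen position to the 3D object
                  hit by the corresponding gaze ray, or None if no hit
    Returns a dict: object id -> accumulated fixation time in seconds.
    """
    attention = defaultdict(float)
    for screen_xy, duration_s in fixations:
        obj = pick_object(screen_xy)
        if obj is not None:
            attention[obj] += duration_s
    return attention
```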

Evaluation of perceptually-based selective rendering techniques using eye-movements analysis

Proceedings of the 22nd Spring Conference on Computer Graphics - SCCG '06, 2006

In recent years, models of the human visual system, in particular bottom-up and top-down visual attention processes, have become commonplace in the design of perceptually-based selective rendering algorithms. Although psychophysical experiments have been performed to assess the perceived quality of selectively rendered imagery, little work has focused on validating how well the regions of interest (ROIs) predicted by perceptually-based metrics correlate with the actual eye movements of human observers. In this paper we present a novel eye tracking study that investigates how accurately ROIs predict where participants direct their eyes while watching an animation. Our experimental study investigated the validity of using saliency and task maps as ROI predictors. The study involved 64 participants in four conditions: participants performing a task, or free-viewing a scene, while being naive or informed about the purpose of the experiment; the informed participants knew that they were going to assess rendering quality. Our overall results indicate that the task map acts as a good predictor of ROIs.

Design and Application of Real-Time Visual Attention Model for the Exploration of 3D Virtual Environments

IEEE Transactions on Visualization and Computer Graphics, 2012

This paper studies the design and application of a novel visual attention model meant to compute a user's gaze position automatically, i.e. without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context that can compute, in real time, a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, contrary to previous models which use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and the cognitive processes that take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines bottom-up and top-down components to compute a continuous gaze point position on screen that hopefully matches the user's. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach; our results are significantly better, with an accuracy gain of more than 100%. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid approach compared to object-based approaches. Finally, we expose different applications of our model when exploring virtual environments. We present different algorithms which can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that heavily relies on multiple-texture sampling. We show that it is possible to use the gaze information of our visual attention model to increase visual quality where the user is looking, while maintaining a high refresh rate. Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system, namely depth-of-field blur, camera motions, and dynamic luminance. All these effects are computed based on the simulated user's gaze, and are meant to improve the user's sensations in future virtual reality applications.

Index Terms: visual attention model, first-person exploration, gaze tracking, visual effects, level of detail.
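As a rough illustration of the bottom-up/top-down combination described above (not the paper's surfel-based implementation), one can multiply the two maps and take an attention-weighted centroid as the continuous screen-space gaze estimate; the map inputs and the centre-of-screen fallback below are assumptions made only to show the overall flow of such a model.

```python
import numpy as np

def estimate_gaze_point(bottom_up, top_down):
    """Combine a bottom-up saliency map with a top-down weight map and
    return a continuous screen-space gaze estimate.

    bottom_up, top_down : (H, W) arrays of non-negative attention weights
    """
    attention = bottom_up * top_down
    total = attention.sum()
    if total == 0:
        h, w = attention.shape
        return (w / 2.0, h / 2.0)            # fall back to screen centre
    ys, xs = np.mgrid[0:attention.shape[0], 0:attention.shape[1]]
    gx = (xs * attention).sum() / total       # attention-weighted centroid, x
    gy = (ys * attention).sum() / total       # attention-weighted centroid, y
    return (gx, gy)
```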

Saliency Based Illumination Control for Guiding User Attention in 3D Scenes

Mugla Journal of Science and Technology, 2021

Visual attention has a major impact on how we perceive 3D environments, and saliency is a component of visual attention expressing how likely a scene or item is to capture our attention due to its apparent features. Saliency depends on shape, shading, brightness, and other visual attributes of items. The saliency distribution of a visual field is influenced by the illumination of the scene, which has a significant impact on those visual properties. This work aims to control saliency by manipulating the illumination parameters of a 3D scene. To this end, given a 3D scene, we investigate the light parameters that provide maximum saliency for the objects of interest. In other words, we propose a method for task-aware automatic lighting setup. In this paper, 2D renderings of a 3D scene from various perspectives are considered, and the effects are analyzed in terms of the saliency distribution under various lighting conditions. Different saliency estimation methods and calculations are also investigated for this process, and eye-tracker-based user experiments are conducted to verify the results.
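A task-aware lighting search of the kind described above could be sketched as a brute-force loop over light parameters, scoring each candidate by the saliency it produces on the object of interest; `render`, `compute_saliency`, and the parameter grids below are hypothetical placeholders, not the paper's actual pipeline.

```python
import itertools
import numpy as np

def best_lighting(render, compute_saliency, target_mask,
                  azimuths, elevations, intensities):
    """Search light parameters for the setup that makes the object of
    interest most salient in the rendered image.

    render(az, el, inten)  -> rendered image of the scene   (hypothetical hook)
    compute_saliency(img)  -> (H, W) saliency map           (hypothetical hook)
    target_mask            -> boolean (H, W) mask of the object of interest
    """
    best_params, best_score = None, -np.inf
    for az, el, inten in itertools.product(azimuths, elevations, intensities):
        saliency = compute_saliency(render(az, el, inten))
        score = saliency[target_mask].mean()      # mean saliency on the target
        if score > best_score:
            best_params, best_score = (az, el, inten), score
    return best_params, best_score
```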

Analysis of Human Gaze Interactions with Texture and Shape

Computational Intelligence for Multimedia Understanding, 2012

Understanding human perception of textured materials is one of the most difficult tasks in computer vision. In this paper we designed a strictly controlled psychophysical experiment with stimuli featuring different combinations of shape, illumination direction, and surface texture. The appearance of the five tested materials was represented by measured view- and illumination-dependent Bidirectional Texture Functions. Twelve subjects participated in a visual search task: to find which of four otherwise identical three-dimensional objects had its texture modified. We investigated the effect of shape and texture on subjects' attention. We are not looking at low-level salience, as the task is to make a high-level quality judgment. Our results revealed several interesting aspects of human perception of different textured materials and surface shapes.