The Role of Graphical Feedback About Self-Movement when Receiving Objects in an Augmented Environment

Grasping versus pointing and the differential use of visual feedback

Human Movement Science, 1993, 12, 219-234

The purpose of this study was to evaluate the speed at which visual feedback about changes in target position could be used to amend reaching movements when subjects pointed to targets or grasped objects. Subjects were required to point to or grasp a target which sometimes changed position unexpectedly upon hand-movement initiation. We found no evidence of early (before peak velocity) trajectory corrections when subjects pointed towards targets. When subjects were grasping perturbed targets, however, peak velocity was lower and was achieved more quickly than on trials in which the target did not change position. These results suggest that subjects were amending their grasping movements very early in the trajectory, indicating that visual information about changes in target position is used differently when subjects intend to grasp an object rather than point to it. These results are discussed in terms of the function of reaching and grasping.

Reaching movements to augmented and graphic objects in virtual environments

Proceedings of the SIGCHI conference on Human factors in computing systems - CHI '01, 2001

This work explores how the availability of visual and haptic feedback affects the kinematics of reaching performance in a tabletop virtual environment. Eight subjects performed reach-to-grasp movements toward target objects of various sizes in conditions where visual and haptic feedback were either present or absent. It was found that movement time (MT) was longer when visual feedback of the moving limb was not available. Further, MT varied systematically with target size when haptic feedback was available (i.e. augmented targets), and thus followed Fitts' law. However, movement times were constant regardless of target size when haptic feedback was removed. In-depth analysis of the reaching kinematics revealed that subjects spent longer decelerating toward smaller targets in conditions where haptic feedback was available. In contrast, deceleration time was constant when haptic feedback was absent. These results suggest that visual feedback about the moving limb and veridical haptic feedback about object contact are extremely important for humans to work effectively in virtual environments.

Perceiving one’s own movements when using a tool

The present study examined what participants perceive of their hand movements when using a tool. In the experiments, different gains for either the x-axis or the y-axis perturbed the relation between hand movements on a digitizer tablet and cursor movements on a display. As a consequence of the perturbation, participants drew circles on the display while their covered hand movements followed either vertical or horizontal ellipses on the digitizer tablet. When asked to evaluate their hand movements, participants were extremely uncertain about their trajectories. By varying the amount of visual feedback, we found that the low awareness of one's own movements originated mainly from an insufficient quality of the tactile and proprioceptive systems, or from an insufficient spatial reconstruction of this information in memory.

The effects of tactile feedback and movement alteration on interaction and awareness with digital embodiments

Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '13, 2013

Collaborative tabletop systems can employ direct touch, where people's real arms and hands manipulate objects, or indirect input, where people are represented on the table with digital embodiments. The input type and the resulting embodiment dramatically influence tabletop interaction: in particular, the touch avoidance that naturally governs people's touching and crossing behavior with physical arms is lost with digital embodiments. One result of this loss is that people are less aware of each other's arms, and less able to coordinate actions and protect personal territories. To determine whether there are strategies that can influence group interaction on shared digital tabletops, we studied augmented digital arm embodiments that provide tactile feedback or movement alterations when people touched or crossed arms. The study showed that both augmentation types changed people's behavior (people crossed less than half as often) and also changed their perception (people felt more aware of the other person's arm, and felt more awkward when touching). This work shows how groupware designers can influence people's interaction, awareness, and coordination abilities when physical constraints are absent.

More than Speed? An Empirical Study of Touchscreens and Body Awareness on an Object Manipulation Task

Lecture Notes in Computer Science, 2011

Touchscreen interfaces do more than allow users to execute speedy interactions. Three interfaces (touchscreen, mouse-drag, on-screen button) were used in the service of performing an object manipulation task. Results showed that planning time was shortest with touchscreens, that touchscreens allowed high action knowledge users to perform the task more efficiently, and that only with touchscreens was the ability to rotate the object the same across all axes of rotation. The concept of closeness is introduced to explain the potential advantages of touchscreen interfaces.

Constraints and Principles for the Design of Human-Machine Interfaces: A Virtual Reality Approach

2016

We conducted two experiments to compare the visual frames of reference used to scale grasping movements directed at objects with those used to estimate the size of the same objects, either immediately or after a 5-s delay. A virtual “workbench” was employed for presenting two different-sized objects in 3D. Subjects were instructed to pick up or estimate the marked one of the two objects. We found that the presence of the other object affected not only the estimate of the size of the target object when subjects made their estimates both immediately and after a 5-s delay, but also the scaling of grip aperture in flight when subjects picked up the target object after a 5-s delay. However, when subjects picked up the target object immediately, their grasp was scaled to the actual size of the target object and was not influenced by the presence of the other object. These findings suggest that the control of delayed motor actions utilizes the same relative metrics in allocentric frames ...

Effects of sensorimotor transformations with graphical input devices

The impact of sensorimotor transformations with graphical input devices is surveyed with regard to action control. Recent evidence suggests that the distal action effect (the moving cursor) rather than the proximal action effect (the moving hand) determines the efficiency of tool use. In Experiment 1, different gains were explored with a touchpad and a mini-joystick. In correspondence with our assumptions, the results revealed evidence that Fitts' law holds for distal action-effect movements, but less so for proximal action-effect movements. Most importantly, this was true not only for the touchpad but also for the mini-joystick. We further found a more efficient use of the touchpad in comparison to the mini-joystick when a high gain was used. In Experiment 2, the dominance of the action effect on motor control was confirmed in an experiment with a digitiser tablet. The tablet amplitude was held constant, but again, movement times followed the perceived index of difficulty on the display. It is concluded that Fitts' law does not rely on the movements of the motor system, but on the distal action effects on the display (changes in visual space). Distal action-effect control plays an important role in understanding the constraints of the acquisition and application of tool transformations.
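For readers unfamiliar with the index of difficulty referred to above, the following sketch shows how Fitts' law relates movement time to target distance and width. It uses the common Shannon formulation; the intercept and slope coefficients here are purely illustrative, not values reported in any of the studies summarized on this page.

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits:
    ID = log2(D / W + 1), where D is target distance and W is target width."""
    return math.log2(distance / width + 1)

def predicted_movement_time(distance: float, width: float,
                            a: float = 0.1, b: float = 0.15) -> float:
    """Fitts' law: MT = a + b * ID (seconds).
    The coefficients a and b are hypothetical; in practice they are fitted
    per device and per condition by linear regression over observed MTs."""
    return a + b * index_of_difficulty(distance, width)

# A target 7 units away and 1 unit wide has ID = log2(8) = 3 bits.
print(index_of_difficulty(7, 1))          # 3.0
print(predicted_movement_time(7, 1))      # 0.1 + 0.15 * 3 = 0.55
```

On this account, the distal action-effect claim above amounts to saying that D and W should be measured in display space (cursor amplitude, on-screen target width) rather than in the space of the hand's movement.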

The effects of movement distance and movement time on visual feedback processing in aimed hand movements

Acta Psychologica, 1987

An experiment is reported that manipulated movement distance, movement time and the availability of vision (Light-On versus Light-Off) on the accuracy of aimed hand movements. There was a constant difference in spatial accuracy between Light-On and Light-Off conditions as a function of distance when the duration of the movement was 200 msec; when the duration of the aimed hand movement was 400 or 600 msec, the difference in spatial accuracy between Light-On and Light-Off conditions increased as distance increased. These results were taken as support for a two-process model of visual feedback processing in aimed hand movements, and provide converging evidence for earlier results on rapid visual feedback processing.

Effect of visual and haptic feedback on grasping movements.

Perceptual estimates of three-dimensional (3D) properties, such as the distance and depth of an object, are often inaccurate. Given the accuracy and ease with which we pick up objects, it may be expected that perceptual distortions do not affect how the brain processes 3D information for reach-to-grasp movements. Nonetheless, empirical results show that grasping accuracy is reduced when visual feedback of the hand is removed. Here we studied whether specific types of training could correct grasping behavior to perform adequately even when any form of feedback is absent. Using a block design paradigm, we recorded the movement kinematics of subjects grasping virtual objects located at different distances in the absence of visual feedback of the hand and haptic feedback of the object, before and after different training blocks with different feedback combinations (vision of the thumb and vision of thumb and index finger, with and without tactile feedback of the object). In the Pretraining block, we found systematic biases of the terminal hand position, the final grip aperture, and the maximum grip aperture like those reported in perceptual tasks. Importantly, the distance at which the object was presented modulated all these biases. In the Posttraining blocks only the hand position was partially adjusted, but final and maximum grip apertures remained unchanged. These findings show that when visual and haptic feedback are absent systematic distortions of 3D estimates affect reach-to-grasp movements in the same way as they affect perceptual estimates. Most importantly, accuracy cannot be learned, even after extensive training with feedback.

Effect of Tactual Feedback on Performance in Virtual Manipulation Tasks

The aim of the reported experiment was to investigate the effect of tactual feedback on human performance in virtual manipulation tasks. We compared performance on a pick-and-place task using the ITR device from Interface Technology Research Limited (UK) and using the LRP force feedback glove (LRP FFG). The ITR device provides binary force information (i.e. grasped/not grasped) while the LRP glove provides force feedback to 14 hand locations. Two objects were used: a cube and a cylinder. The experimental conditions were the following: "RWC" condition (manipulation using bare hands), "ITR" condition (manipulation using the ITR device), "GLOVE" condition (manipulation using the LRP FFG without force feedback), and "LRP" condition (manipulation using the LRP FFG). Results showed that experimental condition had a significant effect on task completion time. We observed that the ITR device led to lower completion times than the LRP FFG. Results also showed that force feedback allowed more accurate placements (position and rotation) of the cube, regardless of the device used. No effect was found concerning cylinder placement.