Learning a new visuomotor transformation: error correction and generalization

Generalisation of adaptation to a visuomotor rotation from curved to straight line reaching

2009

Numerous studies have investigated motor learning by examining the adaptation of reaching movements to visuomotor perturbations that alter the mapping between actual and visually perceived hand position. The picture of the visuomotor transformation from visual input to motor output that has developed consists of three broad phases: integration of hand and target locations in a common reference frame, calculation of a movement vector between hand and target, and transformation of this movement vector from the common reference frame into motor commands. The process of adapting to a visuomotor rotation is generally viewed as an alteration of the vectorial representation of reach planning. When visual feedback is rotated, the motor and visual directions no longer coincide, and the executed motor command is remapped to the visual direction it subsequently produces. In the current set of studies, we examined how learning a visuomotor rotation while reaching to a target with a curved hand path generalizes to straight-path reaching and novel target directions. We found little to no generalization of learning between curved and straight reaches when only endpoint feedback was given. With continuous visual feedback, we found partial transfer. This suggests that, in the absence of continuous visual feedback, the vectorial adaptation hypothesis is insufficient and adaptation to a visuomotor rotation is mediated by the later stages of the visuomotor transformation, when the motor commands specific to the hand path used are being generated.
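The rotated-feedback manipulation these studies share can be sketched as a plane rotation of the true hand position about the start point before it is shown as a cursor. This is a minimal illustration of the paradigm, not code from any of the papers; the function name and angle convention (counterclockwise positive) are assumptions:

```python
import math

def rotate_feedback(hand_xy, angle_deg):
    """Rotate the true hand position about the start location to produce
    the cursor position displayed to the subject (counterclockwise)."""
    a = math.radians(angle_deg)
    x, y = hand_xy
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# A straight reach along +x under a 30 deg rotation is displayed 30 deg CCW,
# so the subject sees a directional error between motor and visual direction.
cursor = rotate_feedback((10.0, 0.0), 30.0)
```

To cancel the error, the subject must learn to aim 30° clockwise of the target, which is the remapping between motor and visual directions that the abstracts describe.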

Learning a visuomotor transformation in a local area of work space produces directional biases in other areas

Journal of Neurophysiology, 1995

1. The dependence of directional biases in reaching movements on the initial position of the hand was studied in normal human subjects moving their unseen hand on a horizontal digitizing tablet to visual targets displayed on a vertical computer screen. 2. When initial hand positions were to the right of midline, movements were systematically biased clockwise. Biases were counterclockwise for starting points to the left. Biases were unaffected by the screen location of the starting and target positions. 3. Vision of the hand in relation to the target before movement, as well as practice with vision of the cursor during the movement, temporarily eliminated these biases. The spatial organization of the biases suggests that, without vision of the limb, the nervous system underestimates the distance of the hand from an axis or plane that includes its most common operating location. 4. To test the hypothesis that such an underestimate might represent an adaptation to a local area of work ...

Learning of visuomotor transformations for vectorial planning of reaching trajectories

2000

The planning of visually guided reaches is accomplished by independent specification of extent and direction. We investigated whether this separation of extent and direction planning for well practiced movements could be explained by differences in the adaptation to extent and directional errors during motor learning. We compared the time course and generalization of adaptation with two types of screen cursor transformation that altered the relationship between hand space and screen space. The first was a gain change that induced extent errors and required subjects to learn a new scaling factor. The second was a screen cursor rotation that induced directional errors and required subjects to learn new reference axes. Subjects learned a new scaling factor at the same rate when training with one or multiple target distances, whereas learning new reference axes took longer and was less complete when training with multiple compared with one target direction. After training to a single target, subjects were able to transfer learning of a new scaling factor to previously unvisited distances and directions. In contrast, generalization of rotation adaptation was incomplete; there was transfer across distances and arm configurations but not across directions. Learning a rotated reference frame only occurred after multiple target directions were sampled during training. These results suggest the separate processing of extent and directional errors by the brain and support the idea that reaching movements are planned as a hand-centered vector whose extent and direction are established via learning a scaling factor and reference axes.
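The two screen-cursor transformations contrasted above differ in what they leave invariant, which may be why they generalize differently: a gain change scales extent identically in every direction, whereas a rotation remaps direction while preserving extent. A small sketch under these assumptions (illustrative only, not the authors' code):

```python
import math

def apply_gain(vec, gain):
    """Gain change: scales movement extent by the same factor in every
    direction, so one learned scaling factor transfers across directions."""
    return (vec[0] * gain, vec[1] * gain)

def apply_rotation(vec, angle_deg):
    """Rotation: remaps movement direction (new reference axes) while
    leaving movement extent unchanged."""
    a = math.radians(angle_deg)
    return (vec[0] * math.cos(a) - vec[1] * math.sin(a),
            vec[0] * math.sin(a) + vec[1] * math.cos(a))

scaled = apply_gain((3.0, 4.0), 1.5)       # extent changes, direction does not
rotated = apply_rotation((3.0, 4.0), 30.0)  # direction changes, extent does not
```

Because the gain is a single direction-independent scalar, learning it at one distance fixes it everywhere; rotated reference axes must instead be sampled across directions, consistent with the generalization pattern reported.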

Frame of reference and adaptation to directional bias in a video-controlled reaching task

Ergonomics, 2002

The present study (N=56) investigated the spatio-temporal accuracy of reaching movements controlled visually through a video monitor. Direct vision of the hand was precluded, and the direction of the hand trajectory, as perceived on the video screen, was varied by changing the angle of the camera. The orientation of the visual scene displayed on the fronto-parallel plane was thus either congruent (0° condition) or non-congruent (directional bias of 15°, 30° or 45° counterclockwise) with respect to the horizontal working space. The goal of this study was to determine whether local learning of a directional bias can be transferred to other locations in the working space, taking into account the magnitude of the directional bias (15°, 30° or 45°) and the position of the successive targets (targets at different distances (TDD) or at different azimuths (TDA)).

Analysis of Pointing Errors Reveals Properties of Data Representations and Coordinate Transformations Within the Central Nervous System

Neural Computation, 2000

The execution of a simple pointing task invokes a chain of processing that includes visual acquisition of the target, coordination of multimodal proprioceptive signals, and ultimately the generation of a motor command that will drive the finger to the desired target location. These processes in the sensorimotor chain can be described in terms of internal representations of the target or limb positions and coordinate transformations between different internal reference frames. In this article we first describe how different types of error analysis can be used to identify properties of the internal representations and coordinate transformations within the central nervous system. We then describe a series of experiments in which subjects pointed to remembered 3D visual targets under two lighting conditions (dim light and total darkness) and after two different memory delays (0.5 and 5.0 s) and report results in terms of variable error, constant error, and local distortion. Finally, we present a set of simulations to help explain the patterns of errors produced in this pointing task. These analyses and experiments provide insight into the structure of the underlying sensorimotor processes employed by the central nervous system.

Influence of disturbances on the control of PC-mouse, goal-directed arm movements

Medical Engineering & Physics, 2010

This study concerns the influence of a visuomotor rotation disturbance on motion dynamics and brain activity. It involves using a PC mouse and introducing a predefined bias angle between the direction of motion of the mouse pointer and that of the screen cursor. Subjects were asked to execute three different tasks, designed to study the effect of visuomotor rotation on direction control, extent control, or the two together. During each task, mouse movement, screen cursor movement, and electroencephalograph (EEG) signals were recorded. An algorithm was used to detect and discard EEG signals contaminated by artifacts. Movement performance indexes and brain activity were used to evaluate motion control, tracking ability, learning, and control. The results suggest that direction control is planned before the movement and governed by adaptive control, whereas extent is regulated by real-time feedback. The measurements also confirm that increased motion and/or brain activity occurs for bias angles in the range ±(90–120°) for both direction and extent control. After-effects when changing the angle of visual rotation were seen to be proportional to the variation in the adaptation angle.
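The trial-by-trial error correction and rotation-proportional after-effects reported here can be illustrated with a standard single-state state-space model from the motor-learning literature. This is a generic sketch, not the authors' analysis; `learning_rate` and `retention` are illustrative assumptions:

```python
def simulate_adaptation(rotation_deg, trials, learning_rate=0.2, retention=0.99):
    """Single-state error-correction model of rotation adaptation: an
    internal estimate of the imposed rotation is updated on every trial
    from the observed directional error."""
    estimate = 0.0
    errors = []
    for _ in range(trials):
        error = rotation_deg - estimate   # directional error seen on screen
        errors.append(error)
        estimate = retention * estimate + learning_rate * error
    return errors

# Directional error shrinks across trials; the retained estimate at the end
# predicts an after-effect that scales with the imposed rotation angle.
learning_curve = simulate_adaptation(30.0, 50)
```

In this model the asymptotic internal estimate, and hence the after-effect once the rotation is removed, is proportional to the imposed rotation, matching the proportionality the abstract describes.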

The role of peripheral and central visual information for the directional control of manual aiming movements

Canadian Journal of Experimental Psychology / Revue canadienne de psychologie expérimentale, 1999

Seeing one's hand in visual periphery has been shown to optimize the directional accuracy of a sweeping hand movement, which is consistent with Paillard's (1980; Paillard & Amblard, 1985) two-channel model of visual information processing. However, contrary to this model, seeing one's hand in central vision, even for a brief period of time, also resulted in optimal directional accuracy. One goal of the present study was to test two opposing hypotheses proposed to explain the latter finding. As a second goal, we wanted to determine whether additional support could be found for the existence of a visual kinetic channel. The results indicated that seeing one's hand in central vision, even for a very short delay, resulted in the same accuracy as being permitted to see one's hand for the duration of the whole movement. This suggests that seeing one's hand around the target might enable one to code its location and that of the target within a single frame of reference and, thus, facilitate movement planning. In addition, the results of the present study indicated that seeing one's hand in motion while in visual periphery permitted better directional accuracy than when this information was not available. This suggests that the movement vector, which is planned prior to movement initiation, can be quickly updated following movement initiation.

Pointing Errors Reflect Biases in the Perception of the Initial Hand Position

Journal of Neurophysiology, 1998

Vindras, Philippe, Michel Desmurget, Claude Prablanc, and Paolo Viviani. Pointing errors reflect biases in the perception of the initial hand position. J. Neurophysiol. 79: 3290–3294, 1998. By comparing the visuomotor performance of 10 adult, normal subjects in three tasks, we investigated whether errors in pointing movements reflect biased estimations of the hand starting position. In a manual pointing task with no visual feedback, subjects aimed at 48 targets spaced regularly around two starting positions. Nine subjects exhibited a similar pattern of systematic errors across targets, i.e., a parallel shift of the end points that accounted, on average, for 49% of the total variability. The direction of the shift depended on the starting location. Systematic errors decreased dramatically in the second condition where subjects were allowed to see their hand before movement onset. The third task was to use a joystick held by the left hand to estimate the location of their (unseen) rig...

Perception action interaction: the oblique effect in the evolving trajectory of arm pointing movements

Experimental Brain Research, 2008

In previous studies, we provided evidence for a directional distortion of the endpoints of movements to memorized target locations. This distortion was similar to a perceptual distortion in direction discrimination known as the oblique effect, so we named it the "motor oblique effect". In this report we analyzed the directional errors during the evolution of the movement trajectory in memory-guided and visually guided pointing movements and compared them with directional errors in a perceptual experiment of arrow pointing. We observed that the motor oblique effect was present in the evolving trajectory of both memory-guided and visually guided reaching movements. In memory-guided pointing the motor oblique effect did not disappear during trajectory evolution, while in visually guided pointing the motor oblique effect disappeared with decreasing distance from the target and was smaller in magnitude than both the perceptual oblique effect and the memory motor oblique effect early after movement initiation. The motor oblique effect in visually guided pointing increased when reaction time was short and disappeared with longer reaction times. The results are best explained by the hypothesis that a low-level oblique effect is present for visually guided pointing movements and is corrected by a mechanism that does not depend on visual feedback from the evolving trajectory and might even be completed during movement planning. A second, cognitive oblique effect is added in the perceptual estimation of direction and affects memory-guided pointing movements. It is finally argued that the motor oblique effect can be a useful probe for the study of perception-action interaction.

Frames of reference and control parameters in visuomanual pointing

Journal of Experimental Psychology: Human Perception and Performance, 1998

Three hypotheses concerning the control variables in visuomanual pointing were tested. Participants pointed to a visual target presented briefly in total darkness on the horizontal plane. The starting position of the hand alternated randomly among 4 points arranged as a diamond. Results show that during the experiment, movement drifted from hypometric to hypermetric. Final positions depended on the starting position. Their average pattern reproduced the diamond of the starting points, either in the same orientation (hypometric trials), or with a double inversion (hypermetric trials). The distribution of variable errors was elliptical, with the major axis aligned with the direction of the movement. Statistical analysis and Monte Carlo simulations showed that the results are incompatible with the final point control hypothesis (A. Polit & E. Bizzi, 1979). Better, but not fully satisfactory, agreement was found with the view that pointing involves comparing initial and desired postures (J. F. Soechting & M. Flanders, 1989a). The hypothesis that accounted best for the results is that final hand position is coded as a vector represented in an extrinsic frame of reference centered on the hand. Reaching for an object, pressing a key, or pointing to a distant location are all familiar acts performed effortlessly under a variety of conditions and constraints. Yet, the underlying interplay between visual and motor mechanisms is still the subject of much debate (cf. Jeannerod, 1988). The key issue can be stated in relatively simple terms. On the one hand, despite eye, head, and body movements, vision affords a stable representation of the objects in the environment with respect to an extrinsic system of reference. On the other hand, kinesthesia affords information concerning the position of all body segments involved in manipulating, grasping, and pointing with respect to an intrinsic system of reference.
Thus, the debate focuses on how this diverse information is set in register to establish a one-to-one correspondence between a posture and a spatial location. Most models of pointing derive from more general conceptions of movement control. One line of speculation (the equilibrium point hypothesis) holds that the motor plan involves the definition of a final stable posture. Along this line, the further strong suggestion has been made that this can be done disregarding the starting position of the limb (for a review, see Bizzi, Hogan, Mussa-Ivaldi, & Giszter,