Multimodal integration in rostral fastigial nucleus provides an estimate of body movement

Internal models of self-motion: computations that suppress vestibular reafference in early vestibular processing

Experimental Brain Research, 2011

In everyday life, vestibular sensors are activated by both self-generated and externally applied head movements. The ability to distinguish inputs that are a consequence of our own actions (i.e., active motion) from those that result from changes in the external world (i.e., passive or unexpected motion) is essential for perceptual stability and accurate motor control. Recent work has made progress toward understanding how the brain distinguishes between these two kinds of sensory inputs. We have performed a series of experiments in which single-unit recordings were made from vestibular afferents and central neurons in alert macaque monkeys during rotation and translation. Vestibular afferents showed no differences in firing variability or sensitivity during active movements when compared to passive movements. In contrast, the analyses of neuronal firing rates revealed that neurons at the first central stage of vestibular processing (i.e., in the vestibular nuclei) were effectively less sensitive to active motion. Notably, however, this ability to distinguish between active and passive motion was not a general feature of early central processing, but rather was a characteristic of a distinct group of neurons known to contribute to postural control and spatial orientation. Our most recent studies have addressed how vestibular and proprioceptive inputs are integrated in the vestibular cerebellum, a region likely to be involved in generating an internal model of self-motion. We propose that this multimodal integration within the vestibular cerebellum is required for eliminating self-generated vestibular information from the subsequent computation of orientation and posture control at the first central stage of processing.
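The suppression mechanism this abstract describes can be pictured as a simple subtraction: an internal model predicts the sensory consequences of the animal's own movement, and that prediction is removed from the afferent signal at the first central stage. The sketch below is purely illustrative (all signal shapes, names, and the unit gain are invented for the example, not taken from the paper), assuming Python with NumPy:

```python
import numpy as np

def central_response(afferent, predicted_reafference, gain=1.0):
    """Toy vestibular-nuclei response: the internally predicted sensory
    consequence of the motor command is subtracted from the afferent
    signal, leaving mainly the externally generated component."""
    return gain * (afferent - predicted_reafference)

t = np.linspace(0.0, 1.0, 200)
active = np.sin(2 * np.pi * 2 * t)             # self-generated head velocity
passive = 0.5 * np.sin(2 * np.pi * 3 * t)      # externally applied perturbation

# afferents encode active and passive motion indistinguishably
afferent = active + passive

# an accurate internal model predicts only the self-generated part
prediction = active

# the residual tracks the passive (unexpected) motion: exafference
# survives while reafference is suppressed
residual = central_response(afferent, prediction)
```

Note the residual equals the passive component only when the prediction is accurate, which is consistent with the selective attenuation during active motion reported above.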

The vestibular system: multimodal integration and encoding of self-motion for motor control

Trends in Neurosciences, 2012

Understanding how sensory pathways transmit information under natural conditions remains a major goal in neuroscience. The vestibular system plays a vital role in everyday life, contributing to a wide range of functions from reflexes to the highest levels of voluntary behavior. Recent experiments establishing that vestibular (self-motion) processing is inherently multimodal also provide insight into a set of interrelated questions: What neural code is used to represent sensory information in vestibular pathways? How does the organism's interaction with the environment shape encoding? How is self-motion information processing adjusted to meet the needs of specific tasks? This review highlights progress that has recently been made towards understanding how the brain encodes and processes self-motion to ensure accurate motor control.

Identification of head motions by central vestibular neurons receiving linear and angular input

Biological Cybernetics, 1999

Most naturally occurring displacements of the head in space, due to either an external perturbation of the body or a self-generated, volitional head movement, apply both linear and angular forces to the head. The vestibular system detects linear and angular accelerations of the head separately, but the succeeding control of gaze and posture often relies upon the combined processing of linear and angular motion information. Thus, the output of a secondary neuron may reflect the linear, the angular, or both components of the head motion. Although the vestibular system is typically studied in terms of separate responses to linear and angular acceleration of the head, many secondary and higher-order neurons in the vestibular system do, in fact, receive information from both sets of motion sensors. The present paper develops methods to analyze responses of neurons that receive both types of information, and focuses on responses to sinusoidal motions composed of a linear and an angular component. We show that each neuron has a preferred motion, but a single neuron cannot code for a single motion. However, a pair of neurons can code for a motion by the relative phases of firing-rate modulation. In this way, information about motion is enhanced by neurons combining information about linear and angular motion.
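The pair-coding idea can be illustrated with phasors: at a single frequency, a convergent neuron's firing-rate modulation is a weighted sum of the linear and angular sinusoids, so one neuron's amplitude and phase cannot uniquely specify the motion, but the relative phase across a pair with different weightings can. The numbers below are a constructed toy example (the gains and motions are invented, not fits from the paper), assuming Python with NumPy:

```python
import numpy as np

def response_phasor(g_lin, g_ang, lin_amp, ang_amp, ang_phase):
    """Steady-state firing-rate modulation of a convergent neuron at one
    frequency, written as a complex phasor (amplitude and phase)."""
    return g_lin * lin_amp + g_ang * ang_amp * np.exp(1j * ang_phase)

# two hypothetical convergent neurons with different angular weightings
n1 = dict(g_lin=1.0, g_ang=1.0)
n2 = dict(g_lin=1.0, g_ang=-1.0)

# two physically different motions (combined linear+angular vs. purely
# angular) chosen so that neuron 1 cannot tell them apart ...
motion_a = dict(lin_amp=1.0, ang_amp=1.0, ang_phase=np.pi / 2)
motion_b = dict(lin_amp=0.0, ang_amp=np.sqrt(2), ang_phase=np.pi / 4)

# ... its response phasor is identical in amplitude and phase
r1a = response_phasor(**n1, **motion_a)
r1b = response_phasor(**n1, **motion_b)

# neuron 2 responds differently, so the relative phase of firing-rate
# modulation across the pair disambiguates the two motions
r2a = response_phasor(**n2, **motion_a)
r2b = response_phasor(**n2, **motion_b)
rel_a = np.angle(r2a) - np.angle(r1a)
rel_b = np.angle(r2b) - np.angle(r1b)
```

The design point matches the abstract: a single convergent neuron collapses the motion onto one amplitude-and-phase pair, while a second neuron with different linear/angular weights breaks the degeneracy.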

Early vestibular processing does not distinguish active from passive self-motion if there is a discrepancy between predicted and actual proprioceptive feedback

Journal of Neurophysiology, 2014

Most of our sensory experiences are gained by active exploration of the world. While the ability to distinguish sensory inputs resulting from our own actions (termed reafference) from those produced externally (termed exafference) is well established, the neural mechanisms underlying this distinction are not fully understood. We have previously proposed that vestibular signals arising from self-generated movements are inhibited by a mechanism that compares the internal prediction of the proprioceptive consequences of self-motion to the actual feedback. Here we directly tested this proposal by recording from single neurons in monkeys during vestibular stimulation that was externally produced and/or self-generated. We show for the first time that vestibular reafference is equivalently canceled for self-generated sensory stimulation produced by activation of the neck musculature (head-on-body motion), or axial musculature (combined head and body motion), when there is no discrepancy betwe...

How Vestibular Neurons Solve the Tilt/Translation Ambiguity

Annals of the New York Academy of Sciences, 2009

The peripheral vestibular system faces a sensory ambiguity: primary otolith afferents respond identically to translational (inertial) accelerations and changes in head orientation relative to gravity. Under certain conditions, this sensory ambiguity can be resolved using extra-otolith cues, including semicircular canal signals. Here we review and summarize how neurons in the vestibular nuclei, rostral fastigial nuclei, cerebellar nodulus/uvula, and thalamus respond during combinations of tilt and translation. We focus primarily on cerebellar cortex responses, as nodulus/uvula Purkinje cells reliably encode translation rather than net gravito-inertial acceleration. In contrast, neurons in the vestibular and rostral fastigial nuclei, as well as the ventral lateral and ventral posterior nuclei of the thalamus, represent a continuum, with some encoding translation and some net gravito-inertial acceleration. This review also outlines how Purkinje cells use semicircular canal signals to solve the ambiguity problem and how this solution fails at low frequencies. We conclude by attempting to bridge the gap between the proposed roles of nodulus/uvula in tilt/translation discrimination and velocity storage.
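The canal-based resolution of the ambiguity can be sketched along a single axis: otoliths report net gravito-inertial acceleration, canals report angular velocity, and integrating the canal signal yields a tilt estimate whose gravity component can be subtracted out. This is a one-axis toy model with invented amplitudes (the review's actual models also include the canals' high-pass dynamics, which is why the solution degrades at low frequencies), assuming Python with NumPy:

```python
import numpy as np

G = 9.81                                         # gravity, m/s^2
t = np.linspace(0.0, 2.0, 1000)
dt = t[1] - t[0]

# the head tilts sinusoidally while also translating
tilt = 0.1 * np.sin(2 * np.pi * t)               # rad
translation = 0.5 * np.sin(2 * np.pi * t)        # m/s^2

# otolith afferents encode net gravito-inertial acceleration along this
# axis, so tilt and translation are confounded in a single signal
gia = G * np.sin(tilt) - translation

# the semicircular canals sense angular velocity; temporally integrating
# it gives a tilt estimate (an idealized canal: no high-pass filtering)
canal_velocity = np.gradient(tilt, t)
tilt_estimate = np.cumsum(canal_velocity) * dt

# subtracting the canal-derived gravity component recovers translation
translation_estimate = G * np.sin(tilt_estimate) - gia
```

With a realistic high-pass canal transfer function, the integrated tilt estimate would drift at low frequencies, reproducing the failure mode the review describes.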

Purkinje Cells in Posterior Cerebellar Vermis Encode Motion in an Inertial Reference Frame

Neuron, 2007

The ability to orient and navigate through the terrestrial environment represents a computational challenge common to all vertebrates. It arises because the motion sensors in the inner ear (the otolith organs and the semicircular canals) transduce self-motion in an egocentric reference frame. As a result, vestibular afferent information reaching the brain is inappropriate for coding our own motion and orientation relative to the outside world. Here we show that cerebellar cortical neuron activity in vermal lobules 9 and 10 reflects the critical computations of transforming head-centered vestibular afferent information into earth-referenced self-motion and spatial orientation signals. Unlike vestibular and deep cerebellar nuclei neurons, where a mixture of responses was observed, Purkinje cells represent a homogeneous population that encodes inertial motion. They carry the earth-horizontal component of a spatially transformed and temporally integrated rotation signal from the semicircular canals, which is critical for computing head attitude, thus isolating inertial linear accelerations during navigation.

Internal models and neural computation in the vestibular system

Experimental Brain Research, 2009

The vestibular system is vital for motor control and spatial self-motion perception. Afferents from the otolith organs and the semicircular canals converge with optokinetic, somatosensory and motor-related signals in the vestibular nuclei, which are reciprocally interconnected with the vestibulocerebellar cortex and deep cerebellar nuclei. Here, we review the properties of the many cell types in the vestibular nuclei, as well as some fundamental computations implemented within this brainstem-cerebellar circuitry. These include the sensorimotor transformations for reflex generation, the neural computations for inertial motion estimation, the distinction between active and passive head movements, as well as the integration of vestibular and proprioceptive information for body motion estimation. A common theme in the solution to such computational problems is the concept of internal models and their neural implementation. Recent studies have shed new insights into important organizational principles that closely resemble those proposed for other sensorimotor systems, where their neural basis has often been more difficult to identify. As such, the vestibular system provides an excellent model to explore common neural processing strategies relevant both for reflexive and for goal-directed, voluntary movement as well as perception. 
Keywords: Vestibular · Computation · Internal model · Reference frame transformation · Eye movement · Motor control · Sensorimotor · Reafference · Motion estimation

Abbreviations:
VOR: Vestibulo-ocular reflex
RVOR: Rotational vestibulo-ocular reflex
TVOR: Translational vestibulo-ocular reflex
VN: Vestibular nuclei
PH: Prepositus hypoglossi
rFN: Rostral fastigial deep cerebellar nuclei
NU: Nodulus and ventral uvula regions of the caudal cerebellar vermis
PH-BT: "Tonic" and "burst-tonic" neurons in the PH and adjacent medial VN
PVP: "Position-vestibular-pause" VN cell type
EH: "Eye-head" VN cell type
VO: "Vestibular-only" VN cell type
FTN: "Floccular-target-neuron" VN cell type

Does the Middle Temporal Area Carry Vestibular Signals Related to Self-Motion?

Journal of Neuroscience, 2009

Recent studies have described vestibular responses in the dorsal medial superior temporal area (MSTd), a region of extrastriate visual cortex thought to be involved in self-motion perception. The pathways by which vestibular signals are conveyed to area MSTd are currently unclear, and one possibility is that vestibular signals are already present in areas that are known to provide visual inputs to MSTd. Thus, we examined whether selective vestibular responses are exhibited by single neurons in the middle temporal area (MT), a visual motion-sensitive region that projects heavily to area MSTd. We compared responses in MT and MSTd to three-dimensional rotational and translational stimuli that were either presented using a motion platform (vestibular condition) or simulated using optic flow (visual condition). When monkeys fixated a visual target generated by a projector, half of MT cells (and most MSTd neurons) showed significant tuning during the vestibular rotation condition. However, when the fixation target was generated by a laser in a dark room, most MT neurons lost their vestibular tuning whereas most MSTd neurons retained their selectivity. Similar results were obtained for free viewing in darkness. Our findings indicate that MT neurons do not show genuine vestibular responses to self-motion; rather, their tuning in the vestibular rotation condition can be explained by retinal slip due to a residual vestibulo-ocular reflex. Thus, the robust vestibular signals observed in area MSTd do not arise through inputs from area MT.

The neural encoding of self-motion

Current Opinion in Neurobiology, 2011

As we move through the world, the brain combines information from multiple sources to allow us to perceive our self-motion. The vestibular system detects and encodes the motion of the head in space. In addition, extra-vestibular cues such as retinal-image motion (optic flow), proprioception, and motor efference signals provide valuable motion cues. Here I focus on the coding strategies that are used by the brain to create neural representations of self-motion. I review recent studies comparing the thresholds of single versus populations of vestibular afferent and central neurons. I then consider recent advances in understanding the brain's strategy for combining information from the vestibular sensors with extra-vestibular cues to estimate self-motion. These studies emphasize the need to consider not only the rules by which multiple inputs are combined, but also how differences in the behavioral context govern the nature of what defines the optimal computation.
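The single-neuron versus population threshold comparison mentioned above rests on a standard ideal-observer idea: averaging the responses of N independent neurons shrinks the noise by roughly the square root of N, lowering the detection threshold accordingly. The simulation below is an illustrative sketch under strong simplifying assumptions (independent Gaussian noise, identical unit-gain neurons; real vestibular populations show correlated noise and heterogeneous sensitivities), assuming Python with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def dprime(stim_amp, n_neurons, noise_sd=1.0, n_trials=5000):
    """Discriminability (d') of stimulus vs. no stimulus from the
    trial-averaged response of n independent, identical neurons."""
    null = rng.normal(0.0, noise_sd, (n_trials, n_neurons)).mean(axis=1)
    stim = stim_amp + rng.normal(0.0, noise_sd, (n_trials, n_neurons)).mean(axis=1)
    pooled_sd = np.sqrt(0.5 * (null.var() + stim.var()))
    return (stim.mean() - null.mean()) / pooled_sd

# a weak stimulus near a single neuron's threshold is easily detected
# by a pool of 100 neurons: averaging shrinks the noise by sqrt(N)
d_single = dprime(0.2, n_neurons=1)
d_pool = dprime(0.2, n_neurons=100)
```

Under these assumptions the pooled d' is about ten times the single-neuron d' for N = 100, which is the kind of single-versus-population gap the threshold comparisons probe.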