Information Encoding and Reconstruction by Phase Coding of Spikes

Chapter 23: Information encoding and reconstruction by phase coding of spikes

Each part of the central nervous system communicates with the others by means of action potentials sent through parallel pathways. Despite the progressively increasing spatial and temporal variation added to the pattern of action potentials at each level of sensory processing, the integrity of information is retained with high precision. What is the mechanism that enables the precise decoding of these action potentials? This chapter is devoted to explaining the transformations of sensory input when information is encoded and decoded in the cortical circuitries. To unravel the full complexity of the problem, we discuss the following questions: Which features of the action potential patterns encode information? What is the relationship between action potentials and oscillations in the brain? What is the segmentation principle of spike processes? How is the precise spatio-temporal pattern of sensory information retained after multiple convergent synaptic transmissions? Is compression involved in the neural information transfer? If so, how is that compressed information decoded in cortical columns? What is the role of gamma oscillations in information encoding and decoding? How are time and space encoded? We illustrate these problems through the example of visual information processing. We contend that phase coding not only answers all these questions, but also provides an efficient, flexible and biologically plausible model for neural computation. We argue that it is timely to begin thinking of the fundamentals of neural coding in terms of the integration of action potentials and oscillations, which, respectively, constitute the discrete and continuous aspects of neural computation.

Coding with spike shapes and graded potentials in cortical networks

BioEssays, 2007

In cortical neurones, analogue dendritic potentials are thought to be encoded into patterns of digital spikes. According to this view, neuronal codes and computations are based on the temporal patterns of spikes: spike times, bursts or spike rates. Recently, we proposed an 'action potential waveform code' for cortical pyramidal neurones in which the spike shape carries information. Broader somatic action potentials are reliably produced in response to higher conductance input, allowing for four times more information transfer than spike times alone. This information is preserved during synaptic integration in a single neurone, as back-propagating action potentials of diverse shapes differentially shunt incoming postsynaptic potentials and so participate in the next round of spike generation. An open question has been whether the information in action potential waveforms can also survive axonal conduction and directly influence synaptic transmission to neighbouring neurones. Several new findings have now shed new light on this subject, showing cortical information processing that transcends the classical models.

Phase-of-firing coding of natural visual stimuli in primary visual cortex

We investigated the hypothesis that neurons encode rich naturalistic stimuli in terms of their spike times relative to the phase of ongoing network fluctuations rather than only in terms of their spike count. We recorded local field potentials (LFPs) and multiunit spikes from the primary visual cortex of anaesthetized macaques while binocularly presenting a color movie. We found that both the spike counts and the low-frequency LFP phase were reliably modulated by the movie and thus conveyed information about it. Moreover, movie periods eliciting higher firing rates also elicited a higher reliability of LFP phase across trials. To establish whether the LFP phase at which spikes were emitted conveyed visual information that could not be extracted by spike rates alone, we compared the Shannon information about the movie carried by spike counts to that carried by the phase of firing. We found that at low LFP frequencies, the phase of firing conveyed 54% additional information beyond that conveyed by spike counts. The extra information available in the phase of firing was crucial for the disambiguation between stimuli eliciting high spike rates of similar magnitude.
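To make the phase-of-firing comparison concrete, here is a minimal Python sketch rather than the study's actual pipeline: the two "movie scenes", the Poisson spike counts, the four LFP phase quadrants, and the uncorrected plug-in information estimator are all illustrative assumptions. It shows how labelling spike counts with the concurrent LFP phase can add information that counts alone cannot provide when two stimuli drive similar firing rates.

```python
# Hedged sketch (not the authors' analysis): plug-in mutual information of a
# spike-count code versus a joint count-plus-LFP-phase code for two stimuli
# that elicit the same mean firing rate.
import numpy as np

rng = np.random.default_rng(0)

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    xs, ys = np.unique(x), np.unique(y)
    pxy = np.array([[np.mean((x == a) & (y == b)) for b in ys] for a in xs])
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Two "scenes" eliciting identical spike-count statistics, but with spikes
# locked to different LFP phase quadrants (0..3).
n_trials = 5000
stimulus = rng.integers(0, 2, n_trials)              # scene A or scene B
counts   = rng.poisson(5.0, n_trials)                # identical rate code
phase    = np.where(stimulus == 0,
                    rng.integers(0, 2, n_trials),    # scene A -> early phases
                    rng.integers(2, 4, n_trials))    # scene B -> late phases

info_count = mutual_information(stimulus, counts)
info_joint = mutual_information(stimulus, counts * 4 + phase)  # joint symbol
print("I(stimulus; spike count)             = %.3f bits" % info_count)
print("I(stimulus; spike count + LFP phase) = %.3f bits" % info_joint)
```

With identical count distributions for the two scenes, the count-only information is near zero while the joint count-plus-phase code recovers roughly one bit per window, which is the sense in which phase disambiguates stimuli eliciting similar spike rates.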

Binding by asynchrony: the neuronal phase code

Neurons display continuous subthreshold oscillations and discrete action potentials (APs). When APs are phase-locked to the subthreshold oscillation, we hypothesize they represent two types of information: the presence/absence of a sensory feature and the phase of the subthreshold oscillation. If subthreshold oscillation phases are neuron-specific, then the sources of APs can be recovered based on the AP times. If the spatial information about the stimulus is converted to AP phases, then APs from multiple neurons can be combined into a single axon and the spatial configuration reconstructed elsewhere. For the reconstruction to be successful, we introduce two assumptions: that a subthreshold oscillation field has a constant phase gradient and that coincidences between APs and intracellular subthreshold oscillations are neuron-specific as defined by the "interference principle." Under these assumptions, a phase-coding model enables information transfer between structures and reproduces experimental phenomena such as phase precession, grid cell architecture, and phase modulation of cortical spikes. This article reviews a recently proposed neuronal algorithm for information encoding and decoding from the phase of APs (Nadasdy, 2009). The focus is on principles that are common across different systems rather than on system-specific differences.
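A minimal sketch of the multiplexing idea, under assumed parameters (an 8 Hz shared oscillation, four source neurons with evenly spaced phase offsets, small spike-time jitter, and a nearest-phase decoder) rather than the published model: spikes from several phase-locked neurons are merged onto a single "axon" and then reassigned to their sources from spike times alone.

```python
# Hedged sketch: if each source neuron fires phase-locked to a shared
# oscillation at a neuron-specific phase offset, spikes merged onto one
# "axon" can be reassigned to their sources from spike times alone.
import numpy as np

rng = np.random.default_rng(1)
freq = 8.0                          # assumed shared subthreshold oscillation (Hz)
period = 1.0 / freq
offsets = np.array([0.0, 0.25, 0.5, 0.75]) * period   # neuron-specific phases

# Each neuron fires once per cycle (its feature is present), with jitter.
cycles = np.arange(20)
spikes, labels = [], []
for i, off in enumerate(offsets):
    t = cycles * period + off + rng.normal(0, 0.002, cycles.size)
    spikes.append(t)
    labels.append(np.full(t.size, i))
spikes = np.concatenate(spikes)
labels = np.concatenate(labels)
order = np.argsort(spikes)                 # merged single-axon spike train
spikes, labels = spikes[order], labels[order]

# Decoder: assign each spike to the source whose phase offset is nearest
# (circular distance) to the spike's phase within the oscillation cycle.
phase = np.mod(spikes, period)
dist = np.abs(phase[:, None] - offsets[None, :])
dist = np.minimum(dist, period - dist)     # wrap around the cycle
decoded = dist.argmin(axis=1)

print("fraction of spikes reassigned to the correct neuron:",
      np.mean(decoded == labels))
```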

Information encoding and reconstruction from the phase of action potentials

Frontiers in Systems Neuroscience, 2009

Phase coding may provide a unified framework to answer a diverse set of daunting problems (see Table 1 in Supplementary Material). The indirect evidence for the tight relationship between AP timing and SMO derives from observations of high correlation between extracellular APs and LFP. High coherency between APs and LFP oscillations is predominant at the gamma and theta frequency bands in the awake brain.


Fundamental questions in neural coding are how neurons encode, transfer, and reconstruct information from the pattern of action potentials (APs) exchanged between different brain structures. We propose a general model of neural coding where neurons encode information by the phase of their APs relative to their subthreshold membrane oscillations. We demonstrate by means of simulations that AP phase retains the spatial and temporal content of the input under the assumption that the membrane potential oscillations are coherent across neurons and between structures and have a constant spatial phase gradient. The model explains many unresolved physiological observations and makes a number of concrete, testable predictions about the relationship between APs, local field potentials, and subthreshold membrane oscillations, and provides an estimate of the spatio-temporal precision of neuronal information processing.
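The constant-phase-gradient assumption can be illustrated with a small toy example; the gradient value, noise level, and one-dimensional geometry below are arbitrary choices, not parameters from the paper. A neuron's position is written into the phase of its spike relative to a reference oscillation and read back by inverting the gradient.

```python
# Hedged toy illustration of the core assumption: with a constant spatial
# phase gradient in the subthreshold oscillation field, spike phase maps
# linearly back onto the position of the emitting neuron.
import numpy as np

rng = np.random.default_rng(2)
grad = np.pi / 2                              # assumed gradient: rad per mm of cortex
positions = rng.uniform(0.1, 3.9, 200)        # true neuron positions (mm)

# Constant-gradient field: a neuron at x sees a local (gamma-band) oscillation
# whose phase leads the reference (x = 0) oscillation by grad * x.
local_phase = grad * positions

# Each neuron spikes at the peak of its local oscillation; the decoder only
# observes the spike's phase relative to the reference oscillation.
spike_phase = np.mod(local_phase + rng.normal(0, 0.05, positions.size), 2 * np.pi)

reconstructed = spike_phase / grad            # invert the phase gradient
print("median reconstruction error: %.3f mm"
      % np.median(np.abs(reconstructed - positions)))
```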

Information theory and neural coding

Nature Neuroscience, 1999

Information theory quantifies how much information a neural response carries about the stimulus. This can be compared to the information transferred in particular models of the stimulus-response function and to maximum possible information transfer. Such comparisons are crucial because they validate assumptions present in any neurophysiological analysis. Here we review information-theory basics before demonstrating its use in neural coding. We show how to use information theory to validate simple stimulus-response models of neural coding of dynamic stimuli. Because these models require specification of spike timing precision, they can reveal which time scales contain information in neural coding. This approach shows that dynamic stimuli can be encoded efficiently by single neurons and that each spike contributes to information transmission. We argue, however, that the data obtained so far do not suggest a temporal code, in which the placement of spikes relative to each other yields ...
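As a toy illustration of how the required spike-timing precision reveals the informative time scale, here is a hedged sketch with invented data: a binary stimulus sets the latency of a single spike within each 10 ms window, and a plug-in (bias-uncorrected) estimator measures the information at coarse and fine temporal precision.

```python
# Hedged sketch (invented toy data, not the reviewed experiments): the same
# spike train carries no information at 10 ms precision but about one bit
# per window at 2 ms precision, because the stimulus is written into latency.
import numpy as np

rng = np.random.default_rng(3)

def plugin_mi(x, y):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    xs, ys = np.unique(x), np.unique(y)
    pxy = np.array([[np.mean((x == a) & (y == b)) for b in ys] for a in xs])
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# One spike per 10 ms window; the binary stimulus sets its latency.
n_windows = 4000
stim = rng.integers(0, 2, n_windows)
latency_ms = np.where(stim == 0,
                      rng.normal(2.0, 0.5, n_windows),
                      rng.normal(7.0, 0.5, n_windows))

coarse = np.ones(n_windows, dtype=int)                 # 10 ms bins: spike count only
fine = np.clip((latency_ms // 2).astype(int), 0, 4)    # 2 ms bins: which bin spiked

print("information at 10 ms precision: %.3f bits" % plugin_mi(stim, coarse))
print("information at  2 ms precision: %.3f bits" % plugin_mi(stim, fine))
```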

The neuronal encoding of information in the brain

Progress in Neurobiology, 2011

We describe the results of quantitative information theoretic analyses of neural encoding, particularly in the primate visual, olfactory, taste, hippocampal, and orbitofrontal cortex. Most of the information turns out to be encoded by the firing rates of the neurons, that is, by the number of spikes in a short time window. This has been shown to be a robust code, for the firing rate representations of different neurons are close to independent for small populations of neurons. Moreover, the information can be read quickly from such encoding, in as little as 20 ms. In quantitative information theoretic studies, only a little additional information is available in temporal encoding involving stimulus-dependent synchronization of different neurons, or the timing of spikes within the spike train of a single neuron. Feature binding appears to be solved by feature-combination neurons rather than by temporal synchrony. The code is sparse and distributed, with the spike firing rate distributions close to exponential or gamma. A feature of the code is that it can be read by neurons that take a synaptically weighted sum of their inputs. This dot product decoding is biologically plausible. Understanding the neural code is fundamental to understanding not only how the cortex represents information, but also how it processes it.
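A small sketch of the dot-product readout mentioned above, with assumed numbers (100 neurons, four stimuli, exponential tuning, Poisson counts in a 20 ms window, and matched-filter weights); it is an illustration of the decoding principle, not the analyses in the review.

```python
# Hedged sketch: a downstream unit classifies the stimulus by taking a
# synaptically weighted sum (dot product) of the population rate vector.
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_stimuli, n_trials = 100, 4, 200

# Sparse distributed code: each stimulus has its own exponential rate profile.
tuning = rng.exponential(scale=5.0, size=(n_stimuli, n_neurons))  # mean rates (Hz)

def population_response(stim):
    """Single-trial spike counts in a 20 ms window (Poisson around the tuning)."""
    return rng.poisson(tuning[stim] * 0.020)

# Readout weights = expected counts for each stimulus (a simple matched filter).
weights = tuning * 0.020

correct = 0
for _ in range(n_trials):
    stim = rng.integers(0, n_stimuli)
    r = population_response(stim)
    decoded = np.argmax(weights @ r)       # weighted-sum (dot product) decoding
    correct += int(decoded == stim)
print("dot-product decoding accuracy from a 20 ms window: %.2f" % (correct / n_trials))
```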

Phase Synchronization Motion and Neural Coding in Dynamic Transmission of Neural Information

IEEE Transactions on Neural Networks, 2011

In order to explore the dynamic characteristics of neural coding in the transmission of neural information in the brain, a neural network model consisting of three neuronal populations is proposed in this paper using the theory of stochastic phase dynamics. Based on the model established, neural phase synchronization motion and neural coding under spontaneous activity and stimulation are examined for the case of varying network structure. Our analysis shows that, under spontaneous activity, the characteristics of phase neural coding are unrelated to the number of neurons participating in firing within the neuronal populations. The numerical simulations support the existence of sparse coding within the brain, and verify both the crucial importance of the magnitudes of the coupling coefficients in neural information processing and the completely different information-processing capabilities of neural information transmission under serial and parallel couplings. The results also show that, under external stimulation, the larger the number of neurons in a neuronal population, the more strongly the stimulation influences the phase synchronization motion and the evolution of neural coding in other neuronal populations. We verify numerically the neurobiological finding that a reduction of the coupling coefficient between neuronal populations implies an enhancement of lateral inhibition in the network, equivalent to lowering the neuronal excitability threshold; the neuronal populations therefore react more strongly to the same stimulation, more neurons become excited, and more neurons participate in neural coding and in the phase synchronization motion.

Index Terms: average number density, coupled neural network model, in-phase neural coding, neuronal population, synchronized motion.

From the introduction: Neural information processing and neural information evolution can be studied using the theory of phase dynamics, which can describe the neural activity of a large neuronal population, reveal the dynamics of neural information processing (e.g., synchronous oscillation, dynamic coupling, rapid convergence), and express neuronal plasticity. Numerous reports on this topic have been published [1]-[8], [9].
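The role of the coupling coefficients can be illustrated with a much-reduced Kuramoto-style sketch; the paper's own formulation uses stochastic phase dynamics with an average number density, so the three-population serial chain, centre frequencies, noise level, and coupling values below are illustrative assumptions only. Stronger between-population coupling yields stronger phase synchronization along the chain.

```python
# Hedged sketch: three serially coupled populations of noisy phase
# oscillators; increasing the between-population coupling increases the
# phase locking between the first and last population.
import numpy as np

rng = np.random.default_rng(5)

def simulate(between_coupling, n_per_pop=100, steps=4000, dt=1e-3):
    """Serial chain 0 -> 1 -> 2; returns the phase-locking value between
    the mean phases of populations 0 and 2 over the whole run."""
    centres = 2 * np.pi * np.array([9.0, 10.0, 11.0])        # centre frequencies (rad/s)
    omega = centres[:, None] + 2 * np.pi * rng.normal(0, 0.5, (3, n_per_pop))
    theta = rng.uniform(0, 2 * np.pi, (3, n_per_pop))
    k_within = 10.0                                          # within-population coupling
    plv = 0j
    for _ in range(steps):
        mean_phase = np.angle(np.exp(1j * theta).mean(axis=1))
        pull = k_within * np.sin(mean_phase[:, None] - theta)   # pull toward own mean
        drive = np.zeros_like(theta)
        drive[1] = between_coupling * np.sin(mean_phase[0] - theta[1])
        drive[2] = between_coupling * np.sin(mean_phase[1] - theta[2])
        noise = rng.normal(0, 2.0, theta.shape) * np.sqrt(dt)   # stochastic phase term
        theta = theta + (omega + pull + drive) * dt + noise
        plv += np.exp(1j * (mean_phase[0] - mean_phase[2]))
    return abs(plv / steps)

for k in (0.0, 4.0, 20.0):
    print("between-population coupling %5.1f -> phase locking 0<->2: %.2f"
          % (k, simulate(k)))
```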

Reading Neural Encodings using Phase Space Methods

Perspectives and Problems in Nonlinear Science, 2003

Environmental signals sensed by nervous systems are often represented in spike trains carried from sensory neurons to higher neural functions where decisions and functional actions occur. Information about the environmental stimulus is contained (encoded) in the train of spikes. We show how to "read" the encoding using state space methods of nonlinear dynamics. We create a mapping from spike signals, which are output from the neural processing system, back to an estimate of the analog input signal. This mapping is realized locally in a reconstructed state space embodying both the dynamics of the source of the sensory signal and the dynamics of the neural circuit doing the processing. We explore this idea using a Hodgkin-Huxley conductance-based neuron model and input from a low-dimensional dynamical system, the Lorenz system. We show that one may accurately learn the dynamical input/output connection and estimate with high precision the details of the input signals from the spike timing output alone. This form of "reading the neural code" focuses on the neural circuitry as a dynamical system and emphasizes how one interprets the dynamical degrees of freedom in the neural circuit as they transform analog environmental information into spike trains.
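The following is a stripped-down sketch of this programme under stated assumptions: a leaky integrate-and-fire unit stands in for the Hodgkin-Huxley model, the Lorenz x-variable is the analog input, and the input at each spike is estimated by a nearest-neighbour lookup in a delay embedding of interspike intervals; the embedding dimension and all parameters are arbitrary choices, not those of the chapter.

```python
# Hedged sketch: estimate an analog (Lorenz) input from spike timing alone
# via a delay embedding of interspike intervals and nearest-neighbour lookup.
import numpy as np

# --- The analog "environmental" signal: x-component of the Lorenz system ---
def lorenz_x(n, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = 1.0, 1.0, 1.05
    out = np.empty(n)
    for i in range(n):
        dx, dy, dz = s * (y - x), x * (r - z) - y, x * y - b * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        out[i] = x
    return out

# --- Spiking encoder: a leaky integrate-and-fire stand-in for Hodgkin-Huxley ---
def encode(drive, dt=0.005, tau=0.05, thresh=1.0, gain=0.04, bias=2.0):
    v, spikes = 0.0, []
    for i, u in enumerate(drive):
        v += dt * ((gain * u + bias) - v) / tau   # membrane relaxes toward the input
        if v >= thresh:                           # emit a spike and reset
            spikes.append(i)
            v = 0.0
    return np.array(spikes)

signal = lorenz_x(20000)
spk = encode(signal)

# --- Reconstructed state space: delay embedding of interspike intervals ---
m = 3                                             # embedding dimension (assumption)
isi = np.diff(spk)
emb = np.stack([isi[i:len(isi) - (m - 1) + i] for i in range(m)], axis=1)
target = signal[spk[m:]]                          # input value at each embedded spike

half = len(emb) // 2                              # first half trains the lookup,
train_e, train_t = emb[:half], target[:half]      # second half tests it
test_e, test_t = emb[half:], target[half:]

# Nearest neighbour in the ISI state space estimates the analog input.
d = np.linalg.norm(test_e[:, None, :] - train_e[None, :, :], axis=2)
pred = train_t[d.argmin(axis=1)]
print("correlation between true and reconstructed input: %.2f"
      % np.corrcoef(test_t, pred)[0, 1])
```

The point of the exercise mirrors the chapter's: the mapping from spike timing back to the analog input is built in a reconstructed state space rather than by fitting an explicit encoding model.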