State-Space Algorithms for Estimating Spike Rate Functions

Statistical smoothing of neuronal data

Network: Computation in Neural Systems, 2003

The purpose of smoothing (filtering) neuronal data is to improve the estimation of the instantaneous firing rate. In some applications, scientific interest centres on functions of the instantaneous firing rate, such as the time at which the maximal firing rate occurs or the rate of increase of firing rate over some experimentally relevant period. In others, the instantaneous firing rate is needed for probability-based calculations. In this paper we point to the very substantial gains in statistical efficiency from smoothing methods compared to using the peristimulus-time histogram (PSTH), and we also demonstrate a new method of adaptive smoothing known as Bayesian adaptive regression splines (DiMatteo I, Genovese C R and Kass R E 2001 Biometrika 88 1055-71). We briefly review additional applications of smoothing with non-Poisson processes and in the joint PSTH for a pair of neurons.
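As a baseline for the smoothing methods compared in this paper, the sketch below shows a conventional fixed-bin PSTH and a fixed-bandwidth Gaussian kernel smoother; it is not the BARS procedure itself, and the function names, units and parameters are illustrative assumptions.

```python
import numpy as np

def psth(spike_trains, t_start, t_stop, bin_width):
    """Peri-stimulus time histogram: trial-averaged firing rate (spikes/s) per bin.

    spike_trains: list of 1-D arrays of spike times in seconds, one array per trial.
    """
    edges = np.arange(t_start, t_stop + bin_width, bin_width)
    counts = sum(np.histogram(st, bins=edges)[0] for st in spike_trains)
    rate = counts / (len(spike_trains) * bin_width)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, rate

def gaussian_smooth(centers, rate, sigma):
    """Fixed-bandwidth Gaussian kernel smoothing of the binned rate estimate."""
    dt = centers[1] - centers[0]
    half = int(np.ceil(4 * sigma / dt))
    kernel = np.exp(-0.5 * (np.arange(-half, half + 1) * dt / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(rate, kernel, mode="same")
```

Adaptive methods such as BARS differ from this sketch precisely in that the effective bandwidth is inferred from the data rather than fixed by hand.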

Modelling spike trains and extracting response latency with Bayesian binning

2009

The peristimulus time histogram (PSTH) and the spike density function (SDF) are commonly used in the analysis of neurophysiological data. The PSTH is usually obtained by binning spike trains, the SDF being a (Gaussian) kernel smoothed version of the PSTH. While selection of the bin width or kernel size is often relatively arbitrary there have been recent attempts to remedy this situation (Shimazaki and Shinomoto, 2007c,b,a). We further develop an exact Bayesian generative model approach to estimating PSTHs and demonstrate its superiority to competing methods using data from early (LGN) and late (STSa) visual areas. We also highlight the advantages of our scheme's automatic complexity control and generation of error bars. Additionally, our approach allows extraction of excitatory and inhibitory response latency from spike trains in a principled way, both on repeated and single trial data. We show that the method can be applied to data with high background firing rates and inhibitory responses (LGN) as well as to data with low firing rates and excitatory responses (STSa). Furthermore, we demonstrate on simulated data that our latency extraction method works for a range of signal-to-noise ratios and background firing rates. While further studies are needed to examine the sensitivity of our method to, for example, gradual changes in firing rate and adaptation, the current results suggest that Bayesian binning is a powerful method for the estimation of firing rate and the extraction of response latency from neuronal spike trains.
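The bin-width selection the authors contrast with (Shimazaki and Shinomoto, 2007) can be sketched as below; this is our reading of that cost function, not the Bayesian binning model developed in the paper, and the variable names are illustrative.

```python
import numpy as np

def select_bin_width(spike_trains, t_start, t_stop, candidate_widths):
    """Pick a PSTH bin width by minimising the Shimazaki-Shinomoto cost
    C(delta) = (2*mean - var) / (n * delta)**2, where mean and var are the
    mean and (biased) variance of pooled spike counts per bin over n trials."""
    n = len(spike_trains)
    pooled = np.concatenate(spike_trains)
    costs = []
    for delta in candidate_widths:
        edges = np.arange(t_start, t_stop + delta, delta)
        k = np.histogram(pooled, bins=edges)[0]
        costs.append((2.0 * k.mean() - k.var()) / (n * delta) ** 2)
    costs = np.asarray(costs)
    return candidate_widths[int(np.argmin(costs))], costs
```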

Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes

Neural Information Processing Systems, 2007

Neural spike trains present challenges to analytical efforts due to their noisy, spiking nature. Many studies of neuroscientific and neural prosthetic importance rely on a smoothed, denoised estimate of the spike train's underlying firing rate. Current techniques to find time-varying firing rates require ad hoc choices of parameters, offer no confidence intervals on their estimates, and can obscure potentially important single trial variability. We present a new method, based on a Gaussian Process prior, for inferring probabilistically optimal estimates of firing rate functions underlying single or multiple neural spike trains. We test the performance of the method on simulated data and experimentally gathered neural spike trains, and we demonstrate improvements over conventional estimators.
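A much-simplified version of the idea can be sketched as Gaussian-process regression on binned counts with an assumed squared-exponential kernel; the paper's method instead uses a point-process likelihood and optimises the hyperparameters, so the code below is only an illustrative approximation with made-up parameter names.

```python
import numpy as np

def gp_smooth_rate(bin_centers, binned_rate, length_scale, signal_var, noise_var):
    """GP regression on a binned rate estimate under a Gaussian noise
    approximation.  Returns the posterior mean and pointwise variance,
    which also yields the confidence intervals absent from ad hoc smoothers."""
    t = np.asarray(bin_centers, float)[:, None]
    y = np.asarray(binned_rate, float)
    K = signal_var * np.exp(-0.5 * (t - t.T) ** 2 / length_scale ** 2)  # RBF prior
    Ky = K + noise_var * np.eye(len(y))
    alpha = np.linalg.solve(Ky, y - y.mean())
    post_mean = y.mean() + K @ alpha
    post_var = np.diag(K - K @ np.linalg.solve(Ky, K))
    return post_mean, post_var
```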

Estimating a dynamic state to relate neural spiking activity to behavioral signals during cognitive tasks

2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2015

An important question in neuroscience is understanding the relationship between high-dimensional electrophysiological data and complex, dynamic behavioral data. One general strategy to address this problem is to define a low-dimensional representation of essential cognitive features describing this relationship. Here we describe a general state-space method to model and fit a low-dimensional cognitive state process that allows us to relate behavioral outcomes of various tasks to simultaneously recorded neural activity across multiple brain areas. In particular, we apply this model to data recorded in the lateral prefrontal cortex (PFC) and caudate nucleus of non-human primates as they perform learning and adaptation in a rule-switching task. First, we define a model for a cognitive state process related to learning, and estimate the progression of this learning state through the experiments. Next, we formulate a point process generalized linear model to relate the spiking activity of each PFC and caudate neuron to...
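As a rough illustration of the state-space idea (not the authors' model, which couples a point-process GLM for the spikes to the behaviour), the sketch below filters a random-walk "learning state" from binary trial outcomes using a simple Gaussian approximation of the Bernoulli likelihood; all parameter values are placeholders.

```python
import numpy as np

def filter_learning_state(outcomes, beta0=0.0, sigma2_w=0.005, x0=0.0, v0=0.5):
    """Approximate forward filter for x_k = x_{k-1} + w_k observed through
    correct/incorrect outcomes n_k ~ Bernoulli(logistic(beta0 + x_k))."""
    x_post, v_post = x0, v0
    means, variances = [], []
    for n_k in outcomes:
        # predict: random-walk state transition
        x_pred, v_pred = x_post, v_post + sigma2_w
        # update: one Laplace-style Gaussian approximation of the Bernoulli term
        p = 1.0 / (1.0 + np.exp(-(beta0 + x_pred)))
        v_post = 1.0 / (1.0 / v_pred + p * (1.0 - p))
        x_post = x_pred + v_post * (n_k - p)
        means.append(x_post)
        variances.append(v_post)
    return np.array(means), np.array(variances)
```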

An Empirical Model for Reliable Spiking Activity

Neural computation, 2015

Understanding a neuron's transfer function, which relates a neuron's inputs to its outputs, is essential for understanding the computational role of single neurons. Recently, statistical models, based on point processes and using generalized linear model (GLM) technology, have been widely applied to predict dynamic neuronal transfer functions. However, the standard version of these models fails to capture important features of neural activity, such as responses to stimuli that elicit highly reliable trial-to-trial spiking. Here, we consider a generalization of the usual GLM that incorporates nonlinearity by modeling reliable and nonreliable spikes as being generated by distinct stimulus features. We develop and apply these models to spike trains from olfactory bulb mitral cells recorded in vitro. We find that spike generation in these neurons is better modeled when reliable and unreliable spikes are considered separately and that this effect is most pronounced for neurons wi...
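The baseline the authors generalise is a point-process (Poisson) GLM; a generic Newton-method fit of such a model is sketched below. It does not implement the reliable/unreliable split described in the abstract, and the design matrix is whatever stimulus and spike-history features one chooses to assume.

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=50, ridge=1e-4):
    """Fit a Poisson GLM with exponential link by Newton's method.

    X: (n_bins, n_features) design matrix (stimulus features, spike history, ...).
    y: (n_bins,) observed spike counts per time bin.
    """
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = np.clip(X @ w, -30, 30)
        mu = np.exp(eta)                       # conditional intensity per bin
        grad = X.T @ (y - mu) - ridge * w
        hess = X.T @ (X * mu[:, None]) + ridge * np.eye(X.shape[1])
        w = w + np.linalg.solve(hess, grad)
    return w
```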

Neuronal Spike Train Analysis in Likelihood Space

PLoS ONE, 2011

Background: Conventional methods for spike train analysis are predominantly based on the rate function. Additionally, many experiments have utilized a temporal coding mechanism. Several techniques have been used for analyzing these two sources of information separately, but using both sources in a single framework remains a challenging problem. Here, an innovative technique is proposed for spike train analysis that considers both rate and temporal information. Methodology/Principal Findings: A point process modeling approach is used to estimate the stimulus-conditional distribution, based on observation of repeated trials. The extended Kalman filter is applied for estimation of the parameters in a parametric model. The marked point process strategy is used in order to extend this model from a single neuron to an entire neuronal population. Each spike train is transformed into a binary vector and then projected from the observation space onto the likelihood space. This projection generates a newly structured space that integrates temporal and rate information, thus improving the performance of distribution-based classifiers. In this space, the stimulus-specific information is used as a distance metric between two stimuli. To illustrate the advantages of the proposed technique, the spiking activity of inferior temporal cortex neurons in the macaque monkey is analyzed in both the observation and likelihood spaces. Based on goodness-of-fit, the performance of the estimation method is demonstrated and the results are subsequently compared with the firing rate-based framework. Conclusions/Significance: Given the integration of rate and temporal information and the improvement in neural discrimination of stimuli, it may be concluded that the likelihood space generates a more accurate representation of stimulus space. Furthermore, the neuronal mechanisms underlying visual object categorization may be addressed within this framework as well.
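The projection step can be illustrated with a discrete-time Bernoulli approximation: each binary spike vector is mapped to its log-likelihood under each candidate stimulus model, and those log-likelihoods are the coordinates in the likelihood space. This sketch assumes the conditional intensities have already been estimated (the paper does this with an extended Kalman filter); the names and the Bernoulli approximation are our own.

```python
import numpy as np

def bernoulli_loglik(spikes, rate, dt):
    """Log-likelihood of a binary spike vector under an inhomogeneous
    Bernoulli approximation with intensity `rate` (Hz) and bin width dt (s)."""
    p = np.clip(np.asarray(rate) * dt, 1e-12, 1 - 1e-12)
    s = np.asarray(spikes)
    return float(np.sum(s * np.log(p) + (1 - s) * np.log(1 - p)))

def to_likelihood_space(spike_trains, rate_models, dt):
    """Each row is one spike train projected onto the likelihood space:
    one coordinate per candidate stimulus-conditional rate model."""
    return np.array([[bernoulli_loglik(s, lam, dt) for lam in rate_models]
                     for s in spike_trains])
```

Classification then amounts to comparing coordinates, e.g. assigning the stimulus whose model gives the largest log-likelihood.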

Learning In Spike Trains: Estimating Within-Session Changes In Firing Rate Using Weighted Interpolation

2016

The electrophysiological study of learning is hampered by modern procedures for estimating firing rates: such procedures usually require large datasets, and also require that included trials be functionally identical. Unless a method can track the real-time dynamics of how firing rates evolve, learning can only be examined in the past tense. We propose a quantitative procedure, called ARRIS, that can uncover trial-by-trial firing dynamics. ARRIS provides reliable estimates of firing rates based on small samples using the reversible-jump Markov chain Monte Carlo algorithm. Using weighted interpolation, ARRIS can also provide estimates that evolve over time. As a result, both real-time estimates of changing activity and of task-dependent tuning can be obtained during the initial stages of learning.
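The weighted-interpolation idea can be caricatured as follows: each trial's rate estimate borrows strength from neighbouring trials through a Gaussian kernel over the trial index, so the estimate is allowed to drift as learning proceeds. This is only a sketch of the interpolation step, not ARRIS itself (whose within-trial estimates come from reversible-jump MCMC), and the bandwidth is an assumed free parameter.

```python
import numpy as np

def trial_weighted_rate(trial_counts, bandwidth):
    """Smooth spike counts across trials with a Gaussian kernel on trial index.

    trial_counts: (n_trials, n_bins) array of per-trial binned spike counts.
    Returns an array of the same shape: a rate estimate that evolves by trial.
    """
    counts = np.asarray(trial_counts, dtype=float)
    n_trials = counts.shape[0]
    idx = np.arange(n_trials)
    out = np.empty_like(counts)
    for i in range(n_trials):
        w = np.exp(-0.5 * ((idx - i) / bandwidth) ** 2)
        out[i] = (w / w.sum()) @ counts
    return out
```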

Parametric estimation of spike train statistics

BMC Neuroscience, 2009

We review here the basics of the formalism of Gibbs distributions and its numerical implementation (details published elsewhere [1]) in order to characterize the statistics of multi-unit spike trains. We present this with the aim of analyzing and modeling synthetic data, especially bio-inspired simulated data, e.g. from Virtual Retina [2], but also experimental data, namely multi-electrode array (MEA) recordings from retina obtained by Adrian Palacios. We remark that Gibbs distributions allow us not only to estimate the spike statistics, given a design choice, but also to compare different models, thus answering comparative questions about the neural code.
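For a handful of neurons, the kind of Gibbs distribution referred to here can be written down and normalised by brute force; the sketch below does this for a pairwise (Ising-like) model over instantaneous binary spike patterns. The paper's formalism also covers spatio-temporal constraints, which this toy example omits, and the fields h and couplings J are assumed given rather than estimated.

```python
import numpy as np
from itertools import product

def gibbs_pattern_probabilities(h, J):
    """Probabilities of binary spike patterns under a pairwise Gibbs model,
    P(s) proportional to exp(sum_i h_i s_i + 0.5 * sum_ij J_ij s_i s_j),
    computed by exhaustive enumeration (feasible only for small populations)."""
    h = np.asarray(h, float)
    J = np.asarray(J, float)          # symmetric couplings, zero diagonal assumed
    patterns = np.array(list(product([0, 1], repeat=len(h))), dtype=float)
    energy = patterns @ h + 0.5 * np.einsum("pi,ij,pj->p", patterns, J, patterns)
    weights = np.exp(energy - energy.max())
    return patterns, weights / weights.sum()
```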