Learning multiscale neural metrics via entropy minimization

Information-theoretic metric learning: 2-D linear projections of neural data for visualization

2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2013

Intracortical neural recordings are typically high-dimensional due to many electrodes, channels, or units and high sampling rates, making it very difficult to visually inspect differences among responses to various conditions. By representing the neural response in a low-dimensional space, a researcher can visually evaluate the amount of information the response carries about the conditions. We consider a linear projection to 2-D space that also parametrizes a metric between neural responses. The projection, and corresponding metric, should preserve class-relevant information pertaining to different behavior or stimuli. We find the projection as a solution to the information-theoretic optimization problem of maximizing the information between the projected data and the class labels. The method is applied to two datasets using different types of neural responses: motor cortex neuronal firing rates of a macaque during a center-out reaching task, and local field potentials in the somatosensory cortex of a rat during tactile stimulation of the forepaw. In both cases, projected data points preserve the natural topology of targets or peripheral touch sites. Using the learned metric on the neural responses increases the nearest-neighbor classification rate versus the original data; thus, the metric is tuned to distinguish among the conditions.
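The abstract's core idea, a linear projection that doubles as a metric used for nearest-neighbor classification, can be sketched generically. This is not the authors' optimization procedure (which maximizes information between projected data and labels); it only illustrates how a fixed projection matrix `A` induces the distance d(x, y) = ||A(x − y)|| and how 1-NN classification uses it. All names and the toy data are hypothetical.

```python
import numpy as np

def projected_metric(A, x, y):
    """Distance induced by a linear projection A: d(x, y) = ||A(x - y)||."""
    return np.linalg.norm(A @ (x - y))

def nn_classify(A, train_X, train_y, x):
    """1-nearest-neighbor label of x under the projected metric."""
    d = [projected_metric(A, xi, x) for xi in train_X]
    return train_y[int(np.argmin(d))]

# Toy example: a 2x4 projection keeps only the first two coordinates,
# which here carry the class information; the rest is "noise" dimensions.
A = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
train_X = np.array([[0.0, 0.0, 5.0, -3.0],
                    [1.0, 1.0, -2.0, 4.0]])
train_y = ["rest", "reach"]
print(nn_classify(A, train_X, train_y, np.array([0.9, 1.1, 9.0, 9.0])))  # → reach
```

In the paper, `A` is the free parameter: it is chosen so that the projected 2-D points are maximally informative about the condition labels, which is what drives the reported gain in nearest-neighbor classification rate.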

Neural Decoding with Kernel-Based Metric Learning

Neural Computation, 2014

In studies of the nervous system, the choice of metric for the neural responses is a pivotal assumption. For instance, a well-suited distance metric enables us to gauge the similarity of neural responses to various stimuli and assess the variability of responses to a repeated stimulus—exploratory steps in understanding how the stimuli are encoded neurally. Here we introduce an approach where the metric is tuned for a particular neural decoding task. Neural spike train metrics have been used to quantify the information content carried by the timing of action potentials. While a number of metrics for individual neurons exist, a method to optimally combine single-neuron metrics into multineuron, or population-based, metrics is lacking. We pose the problem of optimizing multineuron metrics and other metrics using centered alignment, a kernel-based dependence measure. The approach is demonstrated on invasively recorded neural data consisting of both spike trains and local field potential...
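The dependence measure named in the abstract, centered alignment, has a standard definition for two Gram matrices (Cortes et al.): center each kernel matrix and take the normalized Frobenius inner product. The sketch below computes that quantity; it does not reproduce the paper's full metric-optimization pipeline, and the variable names are illustrative.

```python
import numpy as np

def centered_alignment(K, L):
    """Centered kernel alignment between two n x n Gram matrices.

    A(K, L) = <HKH, HLH>_F / (||HKH||_F * ||HLH||_F),
    where H = I - (1/n) 11^T is the centering matrix.
    """
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc, Lc = H @ K @ H, H @ L @ H
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))
```

In a metric-learning setting, `K` would be a kernel computed from the (parametrized) neural-response metric and `L` an ideal label kernel such as yyᵀ; the metric parameters are tuned to maximize the alignment. Note the measure is invariant to kernel scaling.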

Measuring representational distances–the spike-train metrics approach

2010

A fundamental problem in studying population codes is how to compare population activity patterns. Population activity patterns are not just spatial, but spatiotemporal. Thus, a principled approach to comparing population activity patterns begins with the comparison of the temporal activity patterns of a single neuron, and then extends the scope of this comparison to populations spread across space.

Characterizing neural coding performance for populations of sensory neurons: comparing a weighted spike distance metrics to other analytical methods

The identity of sensory stimuli is encoded in the spatio-temporal patterns of responses of the neural population. For stimuli to be discriminated reliably, differences in population responses must be accurately decoded by downstream networks. Several methods for comparing response patterns and their differences have been used by neurophysiologists to characterize the accuracy of the sensory responses studied. Among the most widely used analyses are methods based on Euclidean distances or on spike metric distances such as the one proposed by van Rossum. Methods based on artificial neural networks and machine learning (such as self-organizing maps) have also gained popularity for recognizing and/or classifying specific input patterns. In this brief report, we first compare these three strategies using datasets from 3 different sensory systems. We show that the input-weighting procedure inherent to artificial neural networks allows the extraction of the information most relevant to the dis...
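The van Rossum distance mentioned above maps each spike train to a sum of causal exponential kernels and takes the L2 distance between the filtered traces. For exponential kernels this has a closed form in the spike times, sketched below; the function name and normalization (distance 1 between two well-separated single-spike trains) are one common convention, not taken from this report.

```python
import numpy as np

def van_rossum(u, v, tau=1.0):
    """van Rossum distance between spike trains u, v (lists of spike times),
    using the closed form for causal exponential kernels with time constant tau."""
    def corr(a, b):
        # Inner product of the filtered traces, up to a constant factor.
        return sum(np.exp(-abs(ai - bj) / tau) for ai in a for bj in b)
    d2 = 0.5 * (corr(u, u) + corr(v, v) - 2.0 * corr(u, v))
    return np.sqrt(max(d2, 0.0))  # guard against tiny negative round-off
```

Sanity checks: identical trains are at distance 0; a single spike versus an empty train costs 1/√2; two single spikes much farther apart than `tau` cost 1. The time constant `tau` sets the temporal precision at which spike timing matters.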

Metric-space analysis of spike trains: theory, algorithms and application

Network: Computation in Neural Systems, 1997

We present the mathematical basis of a new approach to the analysis of temporal coding. The foundation of the approach is the construction of several families of novel distances (metrics) between neuronal impulse trains. In contrast to most previous approaches to the analysis of temporal coding, the present approach does not attempt to embed impulse trains in a vector space, and does not assume a Euclidean notion of distance. Rather, the proposed metrics formalize physiologically based hypotheses for those aspects of the firing pattern that might be stimulus dependent, and make essential use of the point-process nature of neural discharges. We show that these families of metrics endow the space of impulse trains with related but inequivalent topological structures. We demonstrate how these metrics can be used to determine whether a set of observed responses has a stimulus-dependent temporal structure without a vector-space embedding. We show how multidimensional scaling can be used to assess the similarity of these metrics to Euclidean distances. For two of these families of metrics (one based on spike times and one based on spike intervals), we present highly efficient computational algorithms for calculating the distances. We illustrate these ideas by application to artificial data sets and to recordings from auditory and visual cortex.
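The spike-time metric from this family (often called the Victor–Purpura distance) is an edit distance: transforming one train into the other costs 1 per inserted or deleted spike and q·|Δt| per moved spike, and the efficient algorithm referred to above is the standard dynamic program. The sketch below follows that recurrence; parameter names are illustrative.

```python
import numpy as np

def victor_purpura(u, v, q=1.0):
    """Victor-Purpura spike-time metric between sorted spike-time lists u, v.
    Edit costs: 1 to add or delete a spike, q*|dt| to shift one in time."""
    n, m = len(u), len(v)
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)   # delete all spikes of u
    G[0, :] = np.arange(m + 1)   # insert all spikes of v
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(G[i - 1, j] + 1,                        # delete u[i-1]
                          G[i, j - 1] + 1,                        # insert v[j-1]
                          G[i - 1, j - 1] + q * abs(u[i - 1] - v[j - 1]))  # shift
    return G[n, m]
```

The cost parameter q (in 1/time units) interpolates between a pure spike-count comparison (q = 0) and a strict coincidence detector (large q, where shifting is never cheaper than delete-plus-insert, capping each pair's cost at 2).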

A comparison of Euclidean metrics and their application in statistical inferences in the spike train space

Statistical analysis and inference on spike trains are among the central topics in neural coding. It is of great interest to understand the underlying distribution and geometric structure of given spike train data. A fundamental obstacle, however, is that the space of all spike trains is not a Euclidean space, and non-Euclidean metrics have commonly been used in the literature to characterize the variability and pattern in neural observations. Over the past few years, two Euclidean-like metrics were independently developed to measure distance in the spike train space. An important benefit of these metrics is that their Euclidean properties make the spike train space suitable for embedding in Euclidean spaces. In this paper, we systematically compare these two metrics in terms of theory, properties, and applications. Because of its Euclidean properties, one of these metrics has been further used to define summary statistics (i.e., mean and variance) and to conduct statistical inferences in the spike train space. Here we provide equivalent definitions using the other metric and show that consistent statistical inferences can be conducted. We then apply both inference frameworks to a neural coding problem for a recording from the geniculate ganglion stimulated by different tastes. Both frameworks achieve desirable results and provide useful new tools for statistical inference in neural spike train space.

Optimal neural spike classification

1988

Being able to record the electrical activities of a number of neurons simultaneously is likely to be important in the study of the functional organization of networks of real neurons. Using one extracellular microelectrode to record from several neurons is one approach to studying the response properties of sets of adjacent and therefore likely related neurons. However, to do this, it is necessary to correctly classify the signals generated by these different neurons. This paper considers this problem of classifying the signals in such an extracellular recording, based upon their shapes, and specifically considers the classification of signals in the case when spikes overlap temporally.

Spike Metrics

Analysis of Parallel Spike Trains, 2010

Important questions in neuroscience, such as how neural activity represents the sensory world, can be framed in terms of the extent to which spike trains differ from one another. Since spike trains can be considered to be sequences of stereotyped events, it is natural to focus on ways to quantify differences between event sequences, known as spike-train metrics. We begin by defining several families of these metrics, including metrics based on spike times, on interspike intervals, and on vector-space embedding. We show how these metrics can be applied to single-neuron and multineuronal data and then describe algorithms that calculate these metrics efficiently. Finally, we discuss analytical procedures based on these metrics, including methods for quantifying variability among spike trains, for constructing perceptual spaces, for calculating information-theoretic quantities, and for identifying candidate features of neural codes.

Dimensionality Reduction on Spatio-Temporal Maximum Entropy Models of Spiking Networks

Maximum entropy models (MEM) have been widely used in the last 10 years to characterize the statistics of networks of spiking neurons. A major drawback of this approach is that the number of parameters used in the statistical model increases very fast with the network size, hindering its interpretation and fast computation. Here, we present a novel framework of dimensionality reduction for generalized MEM handling spatio-temporal correlations. This formalism is based on information geometry where a MEM is a point on a large-dimensional manifold. We exploit the geometrical properties of this manifold in order to find a projection on a lower dimensional space that best captures the high-order statistics. This allows us to define a quantitative criterion that we call the “degree of compressibility” of the neuronal code. A powerful aspect of this method is that it does not require fitting the model. Indeed, the matrix defining the metric of the manifold is computed directly via the data...

Automatic classification of neural spike activity: an application of minimum distance classifiers

Cybernetics & Systems, 2003

Electrophysiological recordings of extracellular neuronal activity often produce complex patterns, caused both by the simultaneous firing of many neurons in the proximity of the recording electrode and by the superposition of biological and instrumental noise onto the neuronal signals. This pattern complexity requires a fast evaluation of the classification results by the experimenter in order to decide how to proceed with the experiment. Euclidean and Mahalanobis minimum distance classifier methods, used in this context, follow a similar approach to the classification problem. A procedure is described by which both methods are applied, tested, and compared using simulated spike populations. The same procedure can be followed when analyzing real spike recordings.
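The two classifiers compared in this paper differ only in the distance used: Euclidean minimum-distance classification assigns a waveform to the nearest class mean, while the Mahalanobis version weights each dimension by the class covariance. A generic sketch (not the paper's procedure; the toy means and covariances are hypothetical):

```python
import numpy as np

def euclidean_classify(x, means):
    """Assign x to the class with the nearest mean under Euclidean distance."""
    d = [np.linalg.norm(x - m) for m in means]
    return int(np.argmin(d))

def mahalanobis_classify(x, means, covs):
    """Assign x to the class minimizing sqrt((x - m)^T C^-1 (x - m))."""
    d = [np.sqrt((x - m) @ np.linalg.inv(C) @ (x - m))
         for m, C in zip(means, covs)]
    return int(np.argmin(d))

# Toy example where the two rules disagree: class 0 has large variance
# along the first axis, so the Mahalanobis rule claims a point the
# Euclidean rule would assign to class 1.
means = [np.array([0.0, 0.0]), np.array([3.0, 0.0])]
covs = [np.diag([4.0, 1.0]), np.diag([0.25, 1.0])]
x = np.array([1.6, 0.0])
print(euclidean_classify(x, means), mahalanobis_classify(x, means, covs))  # → 1 0
```

The disagreement illustrates why the paper compares the two: when spike clusters have very different spreads (e.g., one unit's waveforms are much noisier), the covariance-weighted distance can separate them more faithfully than raw Euclidean distance.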