Spikernels: Predicting Arm Movements by Embedding Population Spike Rate Patterns in Inner-Product Spaces
Related papers
Spikernels: Embedding spiking neurons in inner-product spaces
Neural Information Processing Systems, 2003
Inner-product operators, often referred to as kernels in statistical learning, define a mapping from some input space into a feature space. The focus of this paper is the construction of biologically-motivated kernels for cortical activities. The kernels we derive, termed Spikernels, map spike count sequences into an abstract vector space in which we can perform various prediction tasks. We discuss …
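The Spikernel itself is defined through a dynamic-programming recursion over bin-count sequences; as a minimal stand-in (this is a generic Gaussian kernel, not the paper's actual Spikernel), the sketch below shows what an inner-product operator on binned population spike-rate patterns looks like. All names and shapes are illustrative:

```python
import numpy as np

def rate_pattern_kernel(s, t, sigma=1.0):
    """Toy inner product between two spike-count sequences.

    s, t: arrays of shape (n_bins, n_neurons) holding binned spike counts.
    This Gaussian (RBF) form is a stand-in for the Spikernel recursion:
    it is positive definite, so it implicitly defines a feature-space
    inner product k(s, t) = <phi(s), phi(t)>.
    """
    d2 = np.sum((np.asarray(s, float) - np.asarray(t, float)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Two 4-bin, 3-neuron count patterns; identical patterns score 1.0.
s = np.array([[2, 0, 1], [3, 1, 0], [1, 1, 2], [0, 2, 1]])
k_self = rate_pattern_kernel(s, s)
```

Any such positive definite kernel can be plugged into kernel regression or an SVM to predict movement from the population pattern without ever forming the feature map explicitly.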
Number: 86 Submitted By: Andrew Fagg Last Modified: January 23 2006
A Kernel-Based Approach to Predicting Arm Motion from MI Activity
David Goldberg1, Andrew Fagg1, Nicho Hatsopoulos2, Gregory Ojakangas3, Lee Miller4
1School of Computer Science, University of Oklahoma; 2Department of Organismal Biology and Anatomy, University of Chicago; 3Department of Physics, Drury University; 4Department of Physiology, Northwestern University
Development of a brain-machine interface for control of a prosthetic arm requires an accurate and robust translation of the activation of a small subset of cells into a specification of motion for the prosthetic device. A common approach is first to construct a feature vector that describes the activation of some N cells over a history of M time bins. Next, given a set of observations of cell activity and actual arm movements, a linear model is computed over this feature set that predicts the subsequent motion of the arm (e.g., using gradient descent or the pseudo-inverse) …
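The pipeline this abstract describes (stack N cells over M history bins into one feature vector, then fit a linear model via the pseudo-inverse) can be sketched as follows. The sizes, variable names, and synthetic data are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, T = 8, 5, 200                                  # cells, history bins, time steps
rates = rng.poisson(3.0, size=(T, N)).astype(float)  # binned spike counts

# Hypothetical ground-truth linear map from lagged activity to 2-D motion.
W_true = rng.normal(size=(N * M, 2))

# Feature matrix: each row stacks the most recent M bins of all N cells.
X = np.stack([rates[t - M:t].ravel() for t in range(M, T)])
Y = X @ W_true                                       # synthetic arm motion

# Least-squares fit via the Moore-Penrose pseudo-inverse.
W_hat = np.linalg.pinv(X) @ Y
```

With noiseless targets and a full-column-rank feature matrix, the pseudo-inverse recovers the generating weights exactly; real decoding adds noise, so `W_hat` only approximates the best linear predictor.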
Inner Products for Representation and Learning in the Spike Train Domain
Statistical Signal Processing for Neuroscience and Neurotechnology, 2010
In many neurophysiological studies and brain-inspired computation paradigms, there is still a need for new spike train analysis and learning algorithms because current methods tend to be limited in terms of the tools they provide and are not easily extended. This chapter presents a general framework to develop spike train machine learning methods by defining inner product operators for spike trains. They build on the mathematical theory of reproducing kernel Hilbert spaces (RKHS) and kernel methods, allowing a multitude of analysis and learning algorithms to be easily developed. The inner products utilize functional representations of spike trains, which we motivate from two perspectives: as a biological-modeling problem, and as a statistical description. The biological-modeling approach highlights the potential biological mechanisms taking place at the neuron level and that are quantified by the inner product. On the other hand, by interpreting the representation from a statistical perspective, one relates to other work in the literature. Moreover, the statistical description characterizes which information can be detected by the spike train inner product. The applications of the given inner products for development of machine learning methods are demonstrated in two problems, showing unsupervised and supervised learning.
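One concrete instance of such a spike-train inner product smooths each train with an exponential function and integrates the product of the two resulting intensity functions; up to a constant factor this reduces to a sum of Laplacian terms over all spike pairs, in the style of the memoryless cross-intensity kernel. The sketch below is a simplified illustration, not the chapter's full framework:

```python
import numpy as np

def mci_kernel(s, t, tau=0.01):
    """Cross-intensity-style inner product between two spike trains.

    s, t: 1-D arrays of spike times (seconds). Smoothing each train
    with a causal exponential of time constant tau and integrating the
    product of the smoothed trains yields, up to scaling, a sum of
    Laplacian terms over all pairs of spikes.
    """
    s = np.asarray(s, float)[:, None]
    t = np.asarray(t, float)[None, :]
    return np.sum(np.exp(-np.abs(s - t) / tau))
```

Because this function is symmetric and positive definite, Gram matrices built from it can be fed directly to standard kernel machines (clustering, regression, classification) on spike-train data.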
Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing, 2002
Linear and nonlinear (TDNN) models have been shown to estimate hand position using populations of action potentials collected in the pre-motor and motor cortical areas of a primate's brain. One of the applications of this discovery is to restore movement in patients suffering from paralysis. For real-time implementation of this technology, reliable and accurate signal processing models that produce small error variance in the estimated positions are required. In this paper, we compare the mapping performance of the FIR filter, Gamma filter and recurrent neural network (RNN) in the peaks of reaching movements. Each approach has strengths and weaknesses that are compared experimentally. The RNN approach shows very accurate peak position estimations with small error variance.
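The Gamma filter mentioned above generalizes the FIR tap-delay line with a leaky recursive memory: each tap integrates the previous one, and a single parameter mu trades temporal resolution for memory depth. A minimal sketch of the tap update (names and the scalar-input restriction are illustrative):

```python
import numpy as np

def gamma_filter_taps(x, n_taps, mu):
    """Run a scalar signal through a Gamma memory structure.

    Returns an array of shape (len(x), n_taps) holding taps g_1..g_K:
        g_k[t] = (1 - mu) * g_k[t-1] + mu * g_{k-1}[t-1],  with g_0 = x.
    mu = 1 collapses to an ordinary FIR tap-delay line; mu < 1 gives
    longer but leakier memory per tap.
    """
    g = np.zeros((len(x), n_taps))
    prev = np.zeros(n_taps)   # taps g_1..g_K at time t-1
    prev_x = 0.0              # g_0 (the input) at time t-1
    for t, xt in enumerate(x):
        upstream = np.concatenate(([prev_x], prev[:-1]))
        g[t] = (1 - mu) * prev + mu * upstream
        prev, prev_x = g[t], xt
    return g
```

A linear decoder then regresses hand position onto the tap outputs, exactly as with FIR features, but with fewer taps needed to cover the same history.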
Kernel Methods on Spike Train Space for Neuroscience: A Tutorial
IEEE Signal Processing Magazine, 2013
Over the last decade several positive definite kernels have been proposed to treat spike trains as objects in Hilbert space. However, for the most part, such attempts still remain a mere curiosity for both computational neuroscientists and signal processing experts. This tutorial illustrates why kernel methods can, and have already started to, change the way spike trains are analyzed and processed. The presentation incorporates simple mathematical analogies and convincing practical examples in an attempt to show the yet unexplored potential of positive definite functions to quantify point processes. It also provides a detailed overview of the current state of the art and future challenges with the hope of engaging the readers in active participation.
Kernel-ARMA for Hand Tracking and Brain-Machine Interfacing During 3D Motor Control
2008
Using machine learning algorithms to decode intended behavior from neural activity serves a dual purpose. First, these tools allow patients to interact with their environment through a Brain-Machine Interface (BMI). Second, analyzing the characteristics of such methods can reveal the relative significance of various features of neural activity, task stimuli, and behavior. In this study we adapted, implemented and tested a machine learning method called Kernel Auto-Regressive Moving Average (KARMA), for the task of inferring movements from neural activity in primary motor cortex. Our version of this algorithm is used in an online learning setting and is updated after a sequence of inferred movements is completed. We first used it to track real hand movements executed by a monkey in a standard 3D reaching task. We then applied it in a closed-loop BMI setting to infer intended movement, while the monkey's arms were comfortably restrained, thus performing the task using the BMI alone …
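The core idea of a kernelized auto-regressive decoder can be illustrated with a much-simplified sketch (this is generic kernel ridge regression with one fed-back output, not the paper's exact KARMA algorithm; all data and parameters are synthetic): regress the current output on the current neural features plus the previous output, then roll predictions forward autoregressively at inference time.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """Gaussian Gram matrix between row-feature matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
T = 120
x = rng.normal(size=(T, 4))              # stand-in neural features
y = np.zeros(T)
for t in range(1, T):                    # synthetic AR target trajectory
    y[t] = 0.8 * y[t - 1] + 0.5 * x[t, 0]

# Train: inputs z_t = [x_t, y_{t-1}], kernel ridge regression.
Z = np.hstack([x[1:], y[:-1, None]])
K = rbf(Z, Z)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(K)), y[1:])

# Autoregressive inference: feed back our own predictions, not true y.
y_hat = np.zeros(T)
for t in range(1, T):
    z = np.hstack([x[t], y_hat[t - 1]])[None, :]
    y_hat[t] = (rbf(z, Z) @ alpha).item()
```

Feeding predictions back in place of the true previous output is what makes closed-loop use possible: the decoder never needs the actual arm state at test time.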
Real-time prediction of hand trajectory by ensembles of cortical neurons in primates
Nature, 2000
A neuron was considered selective to a stimulus group if: (1) the firing rate during stimulus presentation was different from the preceding baseline (Wilcoxon test, P < 0.05), (2) an analysis of variance and pairwise comparisons (Wilcoxon test) addressing whether there were differences among the stimulus groups yielded P < 0.05, and (3) an ANOVA (parametric and non-parametric) comparing the variability to distinct stimuli within the selective category to the variability to repeated presentations of the same stimulus showed P > 0.05. We observed neurons selective to faces, objects, spatial layouts and other stimuli. If the across-groups comparisons were not significant but the activity was different from baseline, the neuron was defined as responsive but non-selective. To take into account any effects due to the different intervals, we also compared the responses in a 600-ms window centred on the peak firing rate. The peak, latency and duration were estimated from the spike density function15. For the selective neurons we computed the probability of error, Pe, for classifying the stimulus as belonging to the preferred stimulus category or not15,16. We did not observe any difference between the right and left hemisphere neurons.
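Criterion (1) above, comparing stimulus-window firing against the preceding baseline with a paired Wilcoxon test at P < 0.05, can be reproduced with scipy; the trial counts and Poisson rates below are synthetic stand-ins:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
n_trials = 40
baseline = rng.poisson(5, n_trials)   # spike counts, pre-stimulus window
stimulus = rng.poisson(9, n_trials)   # spike counts, stimulus window

# Paired one-sample Wilcoxon signed-rank test on trial-wise differences.
stat, p = wilcoxon(stimulus - baseline)
responsive = p < 0.05
```

The full criterion would additionally require the across-group ANOVA (P < 0.05) and the within-category variability test (P > 0.05) before labeling the neuron selective rather than merely responsive.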
A Novel Kernel for Learning a Neuron Model from Spike Train Data
From a functional viewpoint, a spiking neuron is a device that transforms input spike trains on its various synapses into an output spike train on its axon. We demonstrate in this paper that the function mapping underlying the device can be tractably learned based on input and output spike train data alone. We begin by posing the problem in a classification-based framework. We then derive a novel kernel for an SRM0 model that is based on PSP- and AHP-like functions. With the kernel we demonstrate how the learning problem can be posed as a Quadratic Program. Experimental results demonstrate the strength of our approach.
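For readers unfamiliar with the SRM0 formulation referenced here: the membrane potential is a weighted sum of PSP-like kernels triggered by input spikes plus AHP-like kernels triggered by the neuron's own output spikes. A minimal sketch with illustrative kernel shapes and time constants (not the paper's exact choices):

```python
import numpy as np

def psp(s, tau_m=0.01, tau_s=0.005):
    """Difference-of-exponentials postsynaptic potential (0 for s <= 0)."""
    s = np.asarray(s, float)
    return np.where(s > 0, np.exp(-s / tau_m) - np.exp(-s / tau_s), 0.0)

def ahp(s, tau_r=0.02):
    """Negative after-hyperpolarization kernel following an output spike."""
    s = np.asarray(s, float)
    return np.where(s > 0, -np.exp(-s / tau_r), 0.0)

def srm0_potential(t, in_spikes, weights, out_spikes):
    """u(t) = sum_i w_i sum_f psp(t - t_i^f) + sum_g ahp(t - t^g)."""
    u = sum(w * psp(t - np.asarray(times)).sum()
            for w, times in zip(weights, in_spikes))
    return u + ahp(t - np.asarray(out_spikes)).sum()

# One synapse (w = 1) that fired at t = 0, no prior output spikes.
u = srm0_potential(0.005, [[0.0]], [1.0], [])
```

Learning the weights so that u crosses threshold exactly at the observed output spike times is what the paper casts as a classification problem and solves as a Quadratic Program.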