Flow using a network of spiking neurons (WIRN 2005)
A system for transmitting a coherent burst of activity through a network of spiking neurons
2006
Abstract. In this paper we examine issues involving the transmission of information by spike trains through networks made of real-time asynchronous spiking neurons. For convenience we use a spiking model that has an intrinsic delay between an input and an output spike. We look at issues involving the transmission of a desired average level of stable spiking activity over many layers, and show how feedback reset inhibition can achieve this aim.
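To make the mechanism concrete, here is a minimal sketch (our own construction with invented parameters, not the paper's model): binary threshold units in a feedforward chain, where feedback reset inhibition raises the next layer's threshold in proportion to the previous volley's excess spikes, cancelling the excess drive so the activity neither dies out nor explodes.

```python
import numpy as np

# A minimal sketch, not the paper's model: binary threshold units in a
# feedforward chain. Feedback "reset" inhibition raises the next layer's
# threshold in proportion to the previous volley's excess spikes, which
# cancels the excess feedforward drive and pins activity near a target.
rng = np.random.default_rng(0)

n_layers, n = 20, 200
target = 50                       # desired spikes per volley (assumed)
w_mean = 1.0 / n                  # mean feedforward weight
gain = w_mean                     # inhibition matched to the mean drive
theta0 = target * w_mean + 0.035  # baseline threshold, tuned by hand

active = (rng.random(n) < 0.6).astype(float)   # overshooting first volley
counts = [int(active.sum())]
for _ in range(n_layers):
    w = rng.uniform(0.5, 1.5, (n, n)) * w_mean      # fresh random layer
    drive = w @ active + rng.normal(0.0, 0.05, n)
    theta = theta0 + gain * (counts[-1] - target)   # feedback reset term
    active = (drive > theta).astype(float)
    counts.append(int(active.sum()))

print(counts)   # ~120 at first, then hovering around `target`
```

With the feedback term removed (gain = 0), the same chain typically saturates or falls silent within a few layers, which is the failure mode the reset inhibition is meant to prevent.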
Indisputable facts when implementing spiking neuron networks
2009
In this article, our wish is to demystify some aspects of coding with spike timing, through a simple review of well-understood technical facts regarding spike coding. The goal is to help readers better understand the extent to which computing and modelling with spiking neuron networks can be biologically plausible and computationally efficient. We intentionally restrict ourselves to a deterministic dynamics in this review.
Dynamic Control of Synchronous Activity in Networks of Spiking Neurons
PLOS ONE, 2016
Oscillatory brain activity is believed to play a central role in neural coding. Accumulating evidence shows that features of these oscillations are highly dynamic: power, frequency and phase fluctuate alongside changes in behavior and task demands. The role and mechanisms supporting this variability are, however, poorly understood. Here we analyze a network of recurrently connected spiking neurons with time delay displaying stable synchronous dynamics. Using mean-field and stability analyses, we investigate the influence of dynamic inputs on the frequency of firing rate oscillations. We show that afferent noise, mimicking inputs to the neurons, causes smoothing of the system's response function, displacing equilibria and altering the stability of oscillatory states. Our analysis further shows that these noise-induced changes cause a shift of the peak frequency of synchronous oscillations that scales with input intensity, leading the network towards critical states. We lastly discuss the extension of these principles to periodic stimulation, in which externally applied driving signals can trigger analogous phenomena. Our results reveal one possible mechanism involved in shaping oscillatory activity in the brain and the associated control principles.
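Schematically, the smoothing effect described in the abstract can be written as follows (our notation, a standard mean-field caricature rather than the paper's full analysis): afferent noise of strength sigma replaces the response function phi by its Gaussian blur, flattening its slope and thereby moving the equilibria and oscillation frequency of the delayed rate dynamics.

```latex
% Schematic (our notation): noise of strength \sigma blurs the response
% function \phi; the blurred \phi_\sigma enters the delayed rate equation.
\phi_\sigma(\mu) = \int_{-\infty}^{\infty} \phi(\mu + \sigma z)\,
  \frac{e^{-z^2/2}}{\sqrt{2\pi}}\, dz ,
\qquad
\tau\,\dot{\nu}(t) = -\nu(t) + \phi_\sigma\!\bigl(J\,\nu(t-d)\bigr).
```

For a hard threshold, the blur turns a step into a sigmoid whose slope decreases with sigma, which is what displaces the equilibria and shifts the frequency of the oscillatory instability.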
Coding of Temporally Varying Signals in Networks of Spiking Neurons with Global Delayed Feedback
Neural Computation, 2005
Oscillatory and synchronized neural activities are commonly found in the brain, and evidence suggests that many of them are caused by global feedback. Their mechanisms and roles in information processing have been discussed often using purely feedforward networks or recurrent networks with constant inputs. On the other hand, real recurrent neural networks are abundant and continually receive information-rich inputs from the outside environment or other parts of the brain. We examine how feedforward networks of spiking neurons with delayed global feedback process information about temporally changing inputs. We show that the network behavior is more synchronous as well as more correlated with and phase-locked to the stimulus when the stimulus frequency is resonant with the inherent frequency of the neuron or that of the network oscillation generated by the feedback architecture. The two eigenmodes have distinct dynamical characteristics, which are supported by numerical simulations a...
A spiking neural algorithm for the Network Flow problem
ArXiv, 2019
It is currently not clear what the potential of neuromorphic hardware is beyond machine learning and neuroscience. In this project, a problem is investigated that is inherently difficult to fully implement in neuromorphic hardware, by introducing a new machine model in which a conventional Turing machine and a neuromorphic oracle work together to solve such problems. We show that the P-complete Max Network Flow problem is intractable in models where the oracle may be consulted only once (the 'create-and-run' model) but becomes tractable using an interactive ('neuromorphic co-processor') model of computation. More specifically, we show that a logspace-constrained Turing machine with access to an interactive neuromorphic oracle with linear space, time, and energy constraints can solve Max Network Flow. A modified variant of this algorithm is implemented on the Intel Loihi chip, a neuromorphic manycore processor developed by Intel Labs. We show that by off-loading the search fo...
Decoding spikes in a spiking neuronal network
Journal of Physics A: Mathematical and General, 2004
We investigate how to reliably decode the input information from the output of a spiking neuronal network. A maximum likelihood estimator of the input signal, together with its Fisher information, is rigorously calculated. The advantage of maximum likelihood estimation over the 'brute-force rate coding' estimate is clearly demonstrated. It is pointed out that the ergodic assumption in neuroscience, i.e. that a temporal average is equivalent to an ensemble average, is in general not true. Averaging over an ensemble of neurons usually gives a biased estimate of the input information. A method to compensate for the bias is proposed. Reconstruction of dynamical input signals with a group of spiking neurons is extensively studied, and our results show that less than a spike is sufficient to accurately decode dynamical inputs.
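A toy version of the comparison (the Poisson tuning-curve population and all parameters are our assumptions, not the paper's network) shows the flavour of the result: the maximum likelihood estimate recovers the input, while a naive rate-weighted average is biased.

```python
import numpy as np

# Toy decoding demo (our construction, not the paper's model): N Poisson
# neurons with Gaussian tuning curves encode a scalar input s. Compare the
# maximum likelihood estimate with a rate-weighted (centre-of-mass) one.
rng = np.random.default_rng(1)

N, T = 32, 0.5                          # neurons, observation window (s)
centers = np.linspace(-2, 2, N)         # preferred inputs (assumed)
f = lambda s: 5 + 40 * np.exp(-(s - centers) ** 2 / (2 * 0.4 ** 2))

s_true = 0.3
counts = rng.poisson(f(s_true) * T)     # one trial of spike counts

# Maximum likelihood: grid search over the Poisson log likelihood.
grid = np.linspace(-2, 2, 2001)
loglik = np.array([np.sum(counts * np.log(f(s) * T) - f(s) * T) for s in grid])
s_ml = grid[np.argmax(loglik)]

# "Brute-force" rate decoding: centre of mass of the counts.
s_rate = np.sum(counts * centers) / np.sum(counts)

print(f"true={s_true:+.3f}  ML={s_ml:+.3f}  rate={s_rate:+.3f}")
```

The uniform baseline firing drags the centre-of-mass estimate toward zero, a small-scale analogue of the biased ensemble average the abstract warns about.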
2004
We study in this paper the effect of a unique initial stimulation on random recurrent networks of leaky integrate-and-fire neurons. Indeed, given a stochastic connectivity, this so-called spontaneous mode exhibits various non-trivial dynamics. This study brings forward a mathematical formalism that allows us to examine the variability of the subsequent dynamics according to the parameters of the weight distribution. Under an independence hypothesis (e.g. in the case of very large networks) we are able to compute the average number of neurons that fire at a given time – the spiking activity. In accordance with numerical simulations, we prove that this spiking activity reaches a steady state; we characterize this steady state and explore the transients.
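A hand-tuned simulation sketch in this spirit (our parameters, not the paper's formalism): a random recurrent network of leaky integrate-and-fire neurons with mean-zero Gaussian weights receives a single initial stimulation, and the spiking activity, i.e. the fraction of neurons firing per time step, settles to a steady level after a transient.

```python
import numpy as np

# Hand-tuned sketch (our parameters): random recurrent LIF network,
# mean-zero Gaussian weights, one initial stimulation at t = 0, and a
# tiny jitter added only for numerical robustness. We track the spiking
# activity: the fraction of neurons firing at each time step.
rng = np.random.default_rng(2)

N, steps, leak = 1000, 300, 0.95
g = 3.0                                    # weight scale (assumed)
W = rng.normal(0.0, g / np.sqrt(N), (N, N))

v = rng.normal(0.0, 2.0, N)                # the unique initial stimulation
spikes = v > 1.0
activity = []
for _ in range(steps):
    v[spikes] = 0.0                        # reset neurons that just fired
    v = leak * v + W @ spikes.astype(float) + rng.normal(0.0, 0.05, N)
    spikes = v > 1.0
    activity.append(spikes.mean())

# After a transient, activity fluctuates around a steady-state value.
print(np.round(activity[:5], 3), "->", round(float(np.mean(activity[-50:])), 3))
```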
A 1st step towards an abstract view of computation in spiking neural-networks
2006
Neural network information is mainly conveyed through (i) event-based quanta, spikes, whereas high-level representations of the related processing are almost always modeled in (ii) some continuous framework. Here, we propose a link between (i) and (ii), so that we can derive the spiking network parameters given a continuous processing, and also obtain an abstract interpretation of the related processing.
Fast Temporal Encoding and Decoding with Spiking Neurons
Neural Computation, 1998
We propose a simple theoretical structure of interacting integrate-and-fire neurons that can handle fast information processing and may account for the fact that only a few neuronal spikes suffice to transmit information in the brain. Using integrate-and-fire neurons that are subjected to individual noise and to a common external input, we calculate their first passage time (FPT), or interspike interval. We suggest using a population average for evaluating the FPT that represents the desired information. Instantaneous lateral excitation among these neurons helps the analysis. By employing a second layer of neurons with variable connections to the first layer, we represent the strength of the input by the number of output neurons that fire, thus decoding the temporal information. Such a model can easily lead to a logarithmic relation as in Weber's law. The latter follows naturally from information maximization if the input strength is statistically distributed according to an app...
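A small numerical companion (parameters invented, not the paper's): noisy integrate-and-fire neurons share a common step input, and the population average of their first passage times, rather than a temporal average over one neuron, provides a fast, monotone readout of the input strength.

```python
import numpy as np

# Numerical companion (our parameters): N integrate-and-fire neurons get a
# common step input mu plus individual white noise; each neuron's first
# passage time (FPT) to threshold is recorded, and the population average
# over neurons serves as the fast readout of mu.
rng = np.random.default_rng(3)

N, dt, tau, v_th = 500, 0.1, 10.0, 1.0        # ms
sigma = 0.3                                   # individual noise strength

for mu in (0.12, 0.15, 0.20):                 # three common input levels
    v = np.zeros(N)
    fpt = np.full(N, np.nan)
    for step in range(1, 20001):
        v += dt / tau * (-v + mu * tau) + sigma * np.sqrt(dt) * rng.normal(size=N)
        crossed = np.isnan(fpt) & (v >= v_th)
        fpt[crossed] = step * dt
        if not np.isnan(fpt).any():
            break
    print(f"mu={mu:.2f}  population-mean FPT = {np.nanmean(fpt):6.2f} ms")
```

Stronger inputs yield shorter population-mean first passage times, so a downstream layer reading the first volley can recover the stimulus strength within a few spikes.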
A distribution of spike transmission delays affects the stability of interacting spiking neurons
We summarize the approach leading to the dynamical mean-field equation for the spike emission rate ν(t) of an interacting population of integrate-and-fire (IF) neurons derived in earlier work. Building on the results concerning the stability conditions and finite-size effects, we investigate how such properties are affected by a non-trivial distribution of spike transmission delays. The main findings are: i) the stability of the collective asynchronous states is improved by widening the distribution of delays; ii) high-frequency components of the power spectrum of the collective activity are damped; iii) details of the stability and spectral properties are strongly affected by the shape of the delay distribution. We present quantitative predictions from the theory and demonstrate very good agreement with simulations.
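Schematically (our notation, not the paper's mean-field equations), distributed delays enter the linearized rate dynamics only through the Laplace transform of the delay density ρ(δ), which is why widening the distribution damps high-frequency modes:

```latex
% Linearization of a delayed rate equation around the asynchronous state;
% \rho(\delta) is the delay density, J the effective coupling gain.
\tau \frac{d\nu}{dt} = -\nu(t)
  + \Phi\!\left( J \int_0^\infty \rho(\delta)\,\nu(t-\delta)\, d\delta \right)
\quad\Rightarrow\quad
\tau \lambda + 1 = J\,\Phi'\,\tilde{\rho}(\lambda), \qquad
\tilde{\rho}(\lambda) = \int_0^\infty \rho(\delta)\, e^{-\lambda \delta}\, d\delta .
```

Since |ρ̃(iω)| shrinks at high ω as ρ broadens, high-frequency resonances are weakened, consistent with findings i) and ii).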
Overview of facts and issues about neural coding by spikes
2009
In the present overview, our wish is to demystify some aspects of coding with spike-timing, through a simple review of well-understood technical facts regarding spike coding. Our goal is a better understanding of the extent to which computing and modeling with spiking neuron networks might be biologically plausible and computationally efficient. We intentionally restrict ourselves to a deterministic implementation.
Efficient Transmission of Subthreshold Signals in Complex Networks of Spiking Neurons
PLOS ONE, 2015
We investigate the efficient transmission and processing of weak (subthreshold) signals in a realistic neural medium in the presence of different levels of underlying noise. Assuming Hebbian weights for maximal synaptic conductances (which naturally balance the network with excitatory and inhibitory synapses) and considering short-term synaptic plasticity affecting such conductances, we found different dynamical phases in the system. These include a memory phase where populations of neurons remain synchronized, an oscillatory phase where transitions between different synchronized populations of neurons appear, and an asynchronous or noisy phase. When a weak stimulus is applied to each neuron and the level of noise in the medium is increased, we found efficient transmission of such stimuli around the transition and critical points separating different phases, and therefore at quite well-defined and distinct levels of stochasticity in the system. We proved that this intriguing phenomenon is quite robust in different situations, including several types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, including diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including the case of spiking neurons, short-term synaptic plasticity and complex network topologies, makes it very likely that it could also appear in actual neural systems, as recent psychophysical experiments suggest. If so, our study will confirm that monitoring the different levels of stochasticity at which efficient processing of information occurs can be a very useful tool for neuroscientists to investigate the existence of phase transitions in actual neural systems, including the brain.
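The core effect, noise-assisted transmission of a subthreshold signal, can be reproduced in a deliberately stripped-down stand-in for the paper's network: a single leaky integrate-and-fire neuron driven by a weak sinusoid (all parameters are ours). The correlation between the hidden signal and the output spikes peaks at an intermediate noise level.

```python
import numpy as np

# Stripped-down stochastic-resonance demo (our toy, not the paper's full
# network): a subthreshold sinusoid drives one LIF neuron at several noise
# levels; spike-signal correlation peaks at moderate noise.
rng = np.random.default_rng(4)

dt, tau, v_th, T = 0.1, 10.0, 1.0, 20000          # ms; T in steps
t = np.arange(T) * dt
signal = 0.04 * np.sin(2 * np.pi * t / 100.0)     # peak v_inf = 0.9 < v_th

for sigma in (0.0, 0.1, 0.3, 0.9):
    v, spikes = 0.0, np.zeros(T)
    for k in range(T):
        v += dt / tau * (-v + (0.05 + signal[k]) * tau) \
             + sigma * np.sqrt(dt) * rng.normal()
        if v >= v_th:
            spikes[k], v = 1.0, 0.0
    # correlation between the smoothed spike train and the hidden signal
    rate = np.convolve(spikes, np.ones(200) / 200, mode="same")
    c = np.corrcoef(rate, signal)[0, 1] if spikes.sum() else 0.0
    print(f"sigma={sigma:.1f}  spikes={int(spikes.sum()):4d}  corr={c:+.2f}")
```

The non-monotonic dependence on sigma is the stochastic-resonance signature: too little noise and the subthreshold signal never triggers spikes; too much and the spikes become signal-independent.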
This article presents a very simple and effective analog spiking neural network simulator, realized with an event-driven method and taking into account a basic biological neuron parameter: spike latency. Other fundamental biological parameters are also considered, such as subthreshold decay and the refractory period. This model allows the synthesis of neural groups able to carry out some substantial functions. The proposed simulator is applied to elementary structures, for which some properties and interesting applications are discussed, such as the realization of a Spiking Neural Network Classifier.
Constructing Precisely Computing Networks with Biophysical Spiking Neurons
The Journal of neuroscience : the official journal of the Society for Neuroscience, 2015
While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random, such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denève and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denève, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks,...
Analysis of spatio-temporal patterns in associative networks of spiking neurons
9th International Conference on Artificial Neural Networks: ICANN '99, 1999
A neural network is presented that stores spatio-temporal patterns (synfire chains) in associative networks of spiking neurons and replays them at a controllable speed. An implicit equation is derived and solved numerically which relates the average speed to the network parameters. The replay speed can be controlled by unspecific background signals and also depends on the number of co-activated synfire chains. Balanced inhibition can prevent the latter dependency. Simulation results confirm the theory, but reveal instabilities for low and high control signals. These boundaries are traced back to four different destabilizing mechanisms.
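A compact illustration of the speed-control mechanism (our construction, all numbers invented): a chain of integrate-and-fire groups replays a stored sequence, and an unspecific background current shortens each group's time-to-threshold, so the same sequence replays faster.

```python
# Toy synfire-chain replay (our construction, all numbers invented): each
# group of the chain drives the next with a fixed weight; an unspecific
# background current changes how long a group needs to reach threshold,
# and thereby controls the replay speed of the stored sequence.

def replay_time(background, n_groups=50, tau=10.0, v_th=1.0, w=0.12, dt=0.1):
    """Time (ms) for the activity volley to traverse the whole chain."""
    t = 0.0
    for _ in range(n_groups):
        v = 0.0
        while v < v_th:                    # integrate drive from the
            v += dt / tau * (-v + (w + background) * tau)  # previous group
            t += dt
    return t

for bg in (0.00, 0.04, 0.08):              # unspecific background levels
    print(f"background={bg:.2f}  replay time = {replay_time(bg):6.1f} ms")
```

Higher unspecific drive shortens every group's time-to-threshold, so the replay time of the whole chain drops steeply with the control signal.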
Accurate Latency Characterization for Very Large Asynchronous Spiking Neural Networks
Proceedings of the International Conference on Bioinformatics Models, Methods and Algorithms, 2011
The simulation of very large, fully asynchronous spiking neural networks is considered in this paper. To this purpose, a preliminary, accurate analysis of the latency time is made, applying classical modelling methods to single neurons. The latency characterization is then used to propose a simplified model able to simulate large neural networks. On this basis, networks with up to 100,000 neurons firing more than 100,000 spikes can be simulated in quite a short time with a simple MATLAB program. Plasticity algorithms are also applied to emulate interesting global effects such as Neuronal Group Selection.
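The flavour of such an event-driven scheme, with latency as the scheduling primitive, can be sketched as follows (the latency function and all parameters are placeholders, not the authors' characterization):

```python
import heapq
import numpy as np

# Skeletal event-driven loop with spike latency (placeholder dynamics, not
# the authors' neuron model): a suprathreshold neuron schedules its output
# spike at a state-dependent latency, so only events, never idle neurons,
# consume computation.
rng = np.random.default_rng(5)

N, v_th, weight = 100000, 1.0, 0.3
targets = rng.integers(0, N, (N, 20))          # 20 random synapses/neuron
state = np.zeros(N)                            # membrane state per neuron
pending = np.zeros(N, dtype=bool)              # has a scheduled spike?

def latency(v):
    """Assumed latency model: stronger excitation -> earlier spike."""
    return 1.0 + 1.0 / v                       # ms, defined for v >= v_th

events = [(0.0, int(i)) for i in rng.choice(N, 50, replace=False)]
heapq.heapify(events)
t = n_spikes = 0
while events and n_spikes < 100000:
    t, src = heapq.heappop(events)
    pending[src] = False
    n_spikes += 1
    state[src] = 0.0                           # reset the firing neuron
    for tgt in targets[src]:
        state[tgt] += weight
        if state[tgt] >= v_th and not pending[tgt]:
            pending[tgt] = True                # schedule its output spike
            heapq.heappush(events, (t + latency(state[tgt]), int(tgt)))
            # a full simulator would also update already-scheduled events

print(f"processed {n_spikes} spikes up to t = {t:.2f} ms")
```

Because the work is proportional to the number of spikes rather than to neurons times time steps, the 100,000-neuron scale quoted above stays cheap.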
Journal of Neuroscience, 2008
Isolated feedforward networks (FFNs) of spiking neurons have been studied extensively for their ability to propagate transient synchrony and asynchronous firing rates in the presence of activity-independent synaptic background noise (Diesmann et al., 1999; van Rossum et al., 2002). In a biologically realistic scenario, however, the FFN should be embedded in a recurrent network, such that the activity in the FFN and the network activity may interact dynamically. Previously, transient synchrony propagating in an FFN was found to destabilize the dynamics of the embedding network (Mehring et al., 2003).
Stationary Bumps in Networks of Spiking Neurons
Neural Computation, 2001
We examine the existence and stability of spatially localized "bumps" of neuronal activity in a network of spiking neurons. Bumps have been proposed as mechanisms of visual orientation tuning, the rat head direction system, and working memory. We show that a bump solution can exist in a spiking network provided the neurons fire asynchronously within the bump. We consider a parameter regime where the bump solution is bistable with an all-off state and can be initiated with a transient excitatory stimulus. We show that the activity profile matches that of a corresponding population rate model. The bump in a spiking network can lose stability through partial synchronization to either a traveling wave or the all-off state. This can occur through a dynamical effect if the synaptic timescale is too fast, or if a transient excitatory pulse is applied to the network. A bump can thus be activated and deactivated with excitatory inputs that may have physiological relevance.
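The bump phenomenology is easy to reproduce in a population rate model of the kind the authors compare against (our discretization and parameters): short-range excitation plus broad inhibition on a ring holds a localized activity profile after the igniting cue is removed, while the all-off state remains stable too.

```python
import numpy as np

# Rate-model counterpart of the spiking bump (our discretization of a
# standard ring model, not the paper's spiking network): short-range
# excitation plus broad inhibition. A brief localized cue ignites a bump
# that persists after the cue is removed; the all-off state is also stable.
N, tau, dt, steps = 128, 10.0, 0.5, 2000
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
d = np.abs(theta[:, None] - theta[None, :])
d = np.minimum(d, 2 * np.pi - d)                     # distance on the ring
W = (4.0 * np.exp(-d**2 / (2 * 0.5**2)) - 1.0) * (2 * np.pi / N)

r = np.zeros(N)
for k in range(steps):
    cue = 2.0 * (np.abs(theta) < 0.3) if k * dt < 50 else 0.0
    inp = W @ r + cue - 0.5                          # -0.5: resting bias
    r += dt / tau * (-r + np.tanh(np.maximum(inp, 0.0)))

# Activity is ~0 everywhere except a localized bump around theta = 0.
print(np.round(r[::8], 2))
```

Without the transient cue, the same network stays in the all-off state, which is the bistability described in the abstract.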
Information Measures in a Small Network of Spiking Neurons
2007
The information transmission among the elements of a small network of spiking neurons is studied using the normalized differential entropy and the Kullback-Leibler distance as information measures. Attention is devoted to the information content of the spiking activity of a reference neuron subject to excitatory and inhibitory stimuli coming from other elements of the considered network. The use of information measures allows one to quantify the effects of the input contributions. The role of inhibition in the spiking activity of the reference neuron is highlighted, and the effect of considering different distributions for the time events of the stimuli is discussed.

1 Introduction The classical characterization of the input-output properties of neurons, as well as of neuronal models, makes use of the so-called frequency transfer functions. Several definitions exist for these functions but they all share the feature of being plots of the output frequency of firing against the strength...
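To make the measures concrete (a toy surrogate, not the paper's network model): one can estimate the Kullback-Leibler distance between the reference neuron's interspike-interval distribution with and without inhibitory input from histograms of simulated ISIs.

```python
import numpy as np

# Toy use of the information measures named above (our construction):
# compare the ISI distribution of a "reference neuron" under excitation
# alone vs. excitation plus inhibition, via a histogram-based KL distance.
rng = np.random.default_rng(6)

def isi_samples(rate_exc, rate_inh, n=20000):
    """Crude ISI surrogate: gamma-like intervals lengthened by inhibition."""
    base = rng.gamma(shape=2.0, scale=1.0 / rate_exc, size=n)
    return base * (1.0 + rate_inh / rate_exc)       # inhibition slows firing

def kl_distance(x, y, bins=60):
    hi = max(x.max(), y.max())
    p, edges = np.histogram(x, bins=bins, range=(0, hi), density=True)
    q, _ = np.histogram(y, bins=bins, range=(0, hi), density=True)
    w = np.diff(edges)
    mask = (p > 0) & (q > 0)                         # avoid log(0)
    return np.sum(w[mask] * p[mask] * np.log(p[mask] / q[mask]))

no_inh = isi_samples(rate_exc=20.0, rate_inh=0.0)
with_inh = isi_samples(rate_exc=20.0, rate_inh=10.0)
print(f"KL(no inhibition || with inhibition) = {kl_distance(no_inh, with_inh):.3f}")
```

A nonzero distance quantifies how strongly the inhibitory input reshapes the reference neuron's spiking statistics, which is the kind of comparison the measures above enable.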
Memory trace in spiking neural networks
Spiking neural networks have a limited memory capacity, such that a stimulus arriving at time t vanishes over a timescale of 200-300 milliseconds. Therefore, only neural computations that require history dependencies within this short range can be accomplished. In this paper, the limited memory capacity of a spiking neural network is extended by coupling it to a delayed dynamical system. This opens the possibility of information exchange between spiking neurons and continuous delayed systems.