Spiking Neural Networks
Related papers
Spontaneous Dynamics of Asymmetric Random Recurrent Spiking Neural Networks
Neural Computation, 2006
We study in this paper the effect of a unique initial stimulation on random recurrent networks of leaky integrate-and-fire neurons. Given a stochastic connectivity, this so-called spontaneous mode exhibits various nontrivial dynamics. This study puts forward a mathematical formalism that allows us to examine the variability of the subsequent dynamics according to the parameters of the weight distribution. Under an independence hypothesis (e.g., in the case of very large networks) we are able to compute the average number of neurons that fire at a given time, i.e., the spiking activity. In accordance with numerical simulations, we prove that this spiking activity reaches a steady state; we characterize this steady state and explore the transients.
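As a rough illustration of this setting, the sketch below simulates a discrete-time random recurrent network of leaky integrate-and-fire neurons driven only by an initial stimulation and tracks the population spiking activity over time; the weight statistics, leak factor and threshold are illustrative assumptions, not the paper's exact formalism.

```python
# Minimal sketch (assumed parameters): random recurrent LIF network in
# discrete time, driven only by a single initial stimulation; the quantity
# tracked is the fraction of neurons firing at each step (spiking activity).
import numpy as np

rng = np.random.default_rng(0)
N = 1000                 # network size (assumption)
leak = 0.9               # membrane leak factor per step (assumption)
theta = 1.0              # firing threshold (assumption)
W = rng.normal(loc=0.0, scale=2.0 / np.sqrt(N), size=(N, N))  # random weights

V = np.zeros(N)
spikes = rng.random(N) < 0.5          # unique initial stimulation
activity = []                         # fraction of neurons firing per step

for t in range(200):
    V = leak * V + W @ spikes         # integrate recurrent input
    spikes = V >= theta               # threshold crossing -> spike
    V[spikes] = 0.0                   # reset after spike
    activity.append(spikes.mean())

print("late-time mean spiking activity:", np.mean(activity[-50:]))
```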
Stochastic Dynamics of a Finite-Size Spiking Neural Network
Neural Computation, 2007
We present a simple Markov model of spiking neural dynamics that can be analytically solved to characterize the stochastic dynamics of a finite-size spiking neural network. We give closed-form estimates for the equilibrium distribution, mean rate, variance, and autocorrelation function of the network activity. The model is applicable to any network where the probability of firing of a neuron in the network depends on only the number of neurons that fired in a previous temporal epoch. Networks with statistically homogeneous connectivity and membrane and synaptic time constants that are not excessively long could satisfy these conditions. Our model completely accounts for the size of the network and correlations in the firing activity. It also allows us to examine how the network dynamics can deviate from mean field theory. We show that the model and solutions are applicable to spiking neural networks in biophysically plausible parameter regimes.
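The class of models described reduces to a one-dimensional Markov chain on the number of neurons that fired in the previous epoch; the sketch below simulates such a chain with an assumed sigmoidal firing-probability function f(k) (not the paper's) and estimates the mean rate, variance and lag-1 autocorrelation of the network activity.

```python
# Minimal sketch: in each epoch every one of N neurons fires independently
# with probability f(k), where k is the number of neurons that fired in the
# previous epoch, so the activity count follows a Binomial(N, f(k)) Markov chain.
import numpy as np

rng = np.random.default_rng(1)
N = 100

def f(k):
    # sigmoidal dependence on previous-epoch activity (illustrative assumption)
    return 1.0 / (1.0 + np.exp(-(k - N / 2) / 10.0))

T = 20000
k = N // 2
counts = np.empty(T, dtype=int)
for t in range(T):
    k = rng.binomial(N, f(k))        # next activity count
    counts[t] = k

burn = counts[1000:]                 # discard transient
print("mean rate:", burn.mean() / N)
print("variance of activity:", burn.var())
print("lag-1 autocorrelation:", np.corrcoef(burn[:-1], burn[1:])[0, 1])
```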
Neuronal avalanches in Watts-Strogatz networks of stochastic spiking neurons
Physical Review E, 2021
Networks of stochastic leaky integrate-and-fire neurons, both at the mean-field level and in square lattices, present a continuous absorbing phase transition with power-law neuronal avalanches at the critical point. Here we complement these results by showing that small-world Watts-Strogatz networks have mean-field critical exponents for any rewiring probability p > 0. For the ring (p = 0), the exponents are those of the directed-percolation class in dimension d = 1. In the model, firings are stochastic and occur in discrete time steps, based on a sigmoidal firing probability function. Each neuron has a membrane potential that integrates the signals received from its neighbors. The membrane potentials are subject to a leakage parameter. We study topologies with a varied number of neuron connections and different values of the leakage parameter. Results indicate that the dynamic range is larger for p = 0. We also study a homeostatic synaptic depression mechanism to self-organize the network towards the critical region. These stochastic oscillations are characteristic of so-called self-organized quasicriticality.
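A minimal sketch of this kind of model, with illustrative parameters: stochastic neurons on a Watts-Strogatz graph whose membrane potentials integrate neighbor spikes, leak, and fire with a sigmoidal probability. The use of networkx and all parameter values are assumptions for illustration, not the paper's code.

```python
# Stochastic discrete-time neurons on a Watts-Strogatz graph (sketch).
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
N, k_neigh, p_rewire = 500, 4, 0.1          # nodes, neighbors, rewiring probability
G = nx.watts_strogatz_graph(N, k_neigh, p_rewire, seed=2)
A = nx.to_numpy_array(G)                    # adjacency matrix

mu = 0.5                                    # leakage parameter (assumption)
gain, theta = 2.0, 1.0                      # sigmoid gain and threshold (assumption)

def phi(v):
    # sigmoidal firing probability of the membrane potential
    return 1.0 / (1.0 + np.exp(-gain * (v - theta)))

V = np.zeros(N)
spikes = rng.random(N) < 0.05               # small initial seed of activity
activity = []
for t in range(500):
    V = mu * V + A @ spikes                 # integrate neighbor spikes, leak
    spikes = rng.random(N) < phi(V)         # stochastic firing
    V[spikes] = 0.0                         # reset fired neurons
    activity.append(spikes.sum())

print("fraction of late steps with surviving activity:",
      np.mean(np.array(activity[-100:]) > 0))
```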
Frontiers in Computational Neuroscience, 2014
Random networks of integrate-and-fire neurons with strong current-based synapses can, unlike previously believed, assume stable states of sustained asynchronous and irregular firing, even without external random background or pacemaker neurons. We analyze the mechanisms underlying the emergence, lifetime and irregularity of such self-sustained activity states. We first demonstrate how the competition between the mean and the variance of the synaptic input leads to a non-monotonic firing-rate transfer in the network. Thus, by increasing the synaptic coupling strength, the system can become bistable: in addition to the quiescent state, a second stable fixed point at moderate firing rates can emerge by a saddle-node bifurcation. Inherently generated fluctuations of the population firing rate around this non-trivial fixed point can trigger transitions into the quiescent state. Hence, the trade-off between the magnitude of the population-rate fluctuations and the size of the basin of attraction of the nontrivial rate fixed point determines the onset and the lifetime of self-sustained activity states. During self-sustained activity, individual neuronal activity is moreover highly irregular, switching between long periods of low firing rate and short burst-like states. We show that this is an effect of the strong synaptic weights and the finite time constant of synaptic and neuronal integration, and can actually serve to stabilize the self-sustained state.
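The bistability argument can be illustrated with a toy self-consistency calculation: the sketch below scans the fixed points of an assumed non-monotonic rate transfer function Phi(nu, J) and shows how a stable fixed point at moderate rates appears alongside the quiescent state as the coupling J grows. The functional form and parameters are illustrative assumptions, not the paper's derivation.

```python
# Toy saddle-node illustration for a non-monotonic rate transfer function.
import numpy as np

def phi(nu, J, c=1.0, a=5.0):
    # rises with the mean input (J*nu) and is suppressed again at strong input;
    # Phi(0) = 0 keeps the quiescent state a fixed point (toy form)
    x = J * nu
    return c * x ** 2 * np.exp(-x / a)

def fixed_points(J, nu_max=40.0, n=4000):
    nu = np.linspace(0.0, nu_max, n)
    g = phi(nu, J) - nu
    roots = [0.0]                                   # quiescent state
    for i in range(n - 1):
        if g[i] * g[i + 1] < 0:                     # sign change -> root
            roots.append(0.5 * (nu[i] + nu[i + 1]))
    return roots

for J in (0.3, 0.5, 0.7, 1.0):
    fps = fixed_points(J)
    labels = []
    for nu_star in fps:
        # classify stability of the map nu -> Phi(nu) by |Phi'(nu*)| < 1
        slope = (phi(nu_star + 1e-4, J) - phi(nu_star - 1e-4, J)) / 2e-4
        labels.append("stable" if abs(slope) < 1.0 else "unstable")
    print(f"J = {J}: fixed points {np.round(fps, 2)} -> {labels}")
```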
On the sensitive dependence on initial conditions of the dynamics of networks of spiking neurons
Journal of Computational Neuroscience, 2006
We have previously formulated an abstract dynamical system for networks of spiking neurons and derived a formal result that identifies the criterion for its dynamics, without inputs, to be "sensitive to initial conditions". Since formal results are applicable only to the extent to which their assumptions are valid, we begin this article by demonstrating that the assumptions are indeed reasonable for a wide range of networks, particularly those that lack overarching structure. A notable aspect of the criterion is the finding that sensitivity does not necessarily arise from randomness of connectivity or of connection strengths in networks. The criterion guides us to cases that decouple these aspects: we present two instructive examples of networks, one with random connectivity and connection strengths, yet whose dynamics is insensitive, and another with structured connectivity and connection strengths, yet whose dynamics is sensitive. We then argue, based on the criterion and the gross electrophysiology of the cortex, that the dynamics of cortical networks ought to be almost surely sensitive under conditions typically found there. We supplement this with two examples of networks modeling cortical columns with widely differing qualitative dynamics, yet with both exhibiting sensitive dependence. Next, we use the criterion to construct a network that undergoes bifurcation from sensitive dynamics to insensitive dynamics when the value of a control parameter is varied. Finally, we extend the formal result ...
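A simple empirical counterpart to such a sensitivity criterion is to run the same deterministic spiking network twice from slightly different initial conditions and watch the spike patterns diverge. The sketch below does this for an assumed discrete-time network with a constant drive; it is not the paper's abstract dynamical system, only a demonstration of the notion of sensitive dependence.

```python
# Two runs of the same deterministic network from slightly different states.
import numpy as np

rng = np.random.default_rng(3)
N, leak, theta, I_ext = 500, 0.9, 1.0, 0.3          # assumed parameters
W = rng.normal(0.0, 1.5 / np.sqrt(N), size=(N, N))  # frozen random coupling

def run(V0, steps=300):
    V, spikes = V0.copy(), np.zeros(N, dtype=bool)
    pattern = []
    for _ in range(steps):
        V = leak * V + W @ spikes + I_ext   # leak, recurrence, constant drive
        spikes = V >= theta                 # deterministic threshold crossing
        V[spikes] = 0.0                     # reset after spike
        pattern.append(spikes.copy())
    return np.array(pattern)

V0 = rng.uniform(0.0, theta, size=N)
V1 = V0.copy()
V1[0] += 0.05 * theta                       # small perturbation of one neuron

d = (run(V0) != run(V1)).sum(axis=1)        # Hamming distance of spike patterns
div = np.flatnonzero(d)
print("diverged:", div.size > 0,
      "| first differing step:", int(div[0]) if div.size else None,
      "| max distance:", int(d.max()))
```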
Avalanches in a Stochastic Model of Spiking Neurons
PLoS Computational Biology, 2010
Neuronal avalanches are a form of spontaneous activity widely observed in cortical slices and other types of nervous tissue, both in vivo and in vitro. They are characterized by irregular, isolated population bursts when many neurons fire together, where the number of spikes per burst obeys a power law distribution. We simulate, using the Gillespie algorithm, a model of neuronal avalanches based on stochastic single neurons. The network consists of excitatory and inhibitory neurons, first with all-to-all connectivity and later with random sparse connectivity. Analyzing our model using the system size expansion, we show that the model obeys the standard Wilson-Cowan equations for large network sizes (~10^5 neurons). When excitation and inhibition are closely balanced, networks of thousands of neurons exhibit irregular synchronous activity, including the characteristic power law distribution of avalanche size. We show that these avalanches are due to the balanced network having weakly stable functionally feedforward dynamics, which amplifies some small fluctuations into the large population bursts. Balanced networks are thought to underlie a variety of observed network behaviours and have useful computational properties, such as responding quickly to changes in input. Thus, the appearance of avalanches in such functionally feedforward networks indicates that avalanches may be a simple consequence of a widely present network structure, when neuron dynamics are noisy. An important implication is that a network need not be "critical" for the production of avalanches, so experimentally observed power laws in burst size may be a signature of noisy functionally feedforward structure rather than of, for example, self-organized criticality.
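A minimal Gillespie-style sketch of this kind of stochastic excitatory/inhibitory model is given below. The population sizes, couplings, decay rate and response function are illustrative assumptions, not the paper's exact parameters; the point is only the event-driven simulation scheme.

```python
# Gillespie simulation of a stochastic E/I population model (sketch).
import numpy as np

rng = np.random.default_rng(4)
NE, NI = 800, 200                         # population sizes (assumption)
wEE, wEI, wIE, wII = 1.0, 2.0, 1.0, 2.0   # coupling strengths (assumption)
alpha, h = 0.1, 0.001                     # decay rate, small external drive (assumption)

def f(s):
    return np.tanh(s) if s > 0 else 0.0   # response function (assumption)

E, I = 10, 0                              # initially active neurons
t, t_end = 0.0, 200.0
events = 0
while t < t_end:
    sE = wEE * E / NE - wEI * I / NI + h
    sI = wIE * E / NE - wII * I / NI + h
    rates = np.array([
        (NE - E) * f(sE),                 # quiescent E neuron activates
        (NI - I) * f(sI),                 # quiescent I neuron activates
        alpha * E,                        # active E neuron decays
        alpha * I,                        # active I neuron decays
    ])
    R = rates.sum()
    if R == 0.0:
        break                             # absorbing quiescent state reached
    t += rng.exponential(1.0 / R)         # waiting time to next event
    event = rng.choice(4, p=rates / R)    # which transition occurs
    E += (event == 0) - (event == 2)
    I += (event == 1) - (event == 3)
    events += 1

print("events simulated:", events, "| final (E, I):", (E, I))
```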
Stochastic population dynamics of spiking neurons
2003
We will review in this chapter some developments in the use of the theory of stochastic processes and nonlinear dynamics in the study of large-scale dynamical models of interacting spiking neurons. Without aiming at full coverage of the subject, we will review ...
Population dynamics of interacting spiking neurons
Physical Review E, 2002
A dynamical equation is derived for the spike emission rate ν(t) of a homogeneous network of integrate-and-fire (IF) neurons in a mean-field theoretical framework, where the activity of the single cell depends both on the mean afferent current (the "field") and on its fluctuations. Finite-size effects are taken into account by a stochastic extension of the dynamical equation for ν; their effect on the collective activity is studied in detail. Conditions for the local stability of the collective activity are shown to be naturally and simply expressed in terms of (the slope of) the single-neuron, static, current-to-rate transfer function. In the framework of the local analysis, we studied the spectral properties of the time-dependent collective activity of the finite network in an asynchronous state; finite-size fluctuations act as an ongoing self-stimulation, which probes the spectral structure of the system over a wide frequency range. The power spectrum of ν exhibits modes ranging from very high frequency (depending on spike transmission delays), which are responsible for instability, to oscillations at a few Hz, a direct expression of the diffusion process describing the population dynamics. The latter "diffusion" slow modes do not contribute to the stability conditions. Their characteristic times govern the transient response of the network; these reaction times also exhibit a simple dependence on the slope of the neuron transfer function. We speculate on the possible relevance of our results for the change in the characteristic response time of a neural population during the learning process which shapes the synaptic couplings, thereby affecting the slope of the transfer function. There is remarkable agreement of the theoretical predictions with simulations of a network of IF neurons with a constant leakage term for the membrane potential.
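A schematic sketch of such a stochastic rate equation with finite-size noise is given below; the transfer function Phi, relaxation time tau and the sqrt(nu/N) noise model are assumptions chosen only to illustrate how finite-size fluctuations probe the dynamics around the rate fixed point, not the paper's derived equation.

```python
# Euler-Maruyama integration of tau * dnu/dt = -nu + Phi(nu) + finite-size noise.
import numpy as np

rng = np.random.default_rng(5)
N = 5000                      # network size (assumption)
tau = 0.01                    # rate relaxation time in seconds (assumption)
dt, T = 1e-4, 10.0

def Phi(nu):
    # static current-to-rate transfer function (toy sigmoid, spikes/s)
    return 40.0 / (1.0 + np.exp(-(nu - 10.0) / 5.0))

steps = int(T / dt)
nu = np.empty(steps)
nu[0] = 20.0
for i in range(1, steps):
    drift = (-nu[i - 1] + Phi(nu[i - 1])) / tau
    noise = np.sqrt(max(nu[i - 1], 0.0) / N) * rng.normal()   # finite-size noise
    nu[i] = max(nu[i - 1] + dt * drift + np.sqrt(dt) * noise, 0.0)

# finite-size fluctuations act as ongoing self-stimulation: inspect the spectrum
x = nu[steps // 2:] - nu[steps // 2:].mean()
freqs = np.fft.rfftfreq(x.size, d=dt)
power = np.abs(np.fft.rfft(x)) ** 2
low = power[(freqs > 0) & (freqs < 10)].sum() / power[1:].sum()
print("fixed-point rate approx.:", round(float(nu[steps // 2:].mean()), 2), "spikes/s")
print("fraction of fluctuation power below 10 Hz:", round(float(low), 3))
```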