Transient responses in dynamical neural models

Neural networks with transient state dynamics

New Journal of Physics, 2007

kind of dynamical behavior in a controllable fashion and in a manner applicable to a variety of starting systems. That is, we are interested in neural networks which generate transient-state dynamics in the form of a meaningful time series of states that approach arbitrarily close to predefined attractor ruins.
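
The "attractor ruins" idea can be illustrated with a short sketch (our own construction, not the paper's code): a Hopfield-style network whose stored patterns are destabilised by a slow activity-dependent fatigue variable, so that each pattern is visited only transiently before the state moves on. All parameter values below are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 3
patterns = rng.choice([-1.0, 1.0], size=(P, N))
W = patterns.T @ patterns / N            # Hebbian weights store P patterns
np.fill_diagonal(W, 0.0)

x = patterns[0] + 0.3 * rng.standard_normal(N)   # start near pattern 0
phi = np.zeros(N)                                # slow fatigue variable
dt, tau_x, tau_phi, g = 0.1, 1.0, 40.0, 1.2      # assumed time scales

for step in range(4001):
    h = W @ x - g * phi                  # fatigue undermines the active attractor
    x += dt / tau_x * (-x + np.tanh(2.0 * h))
    phi += dt / tau_phi * (x - phi)      # fatigue slowly tracks sustained activity
    if step % 500 == 0:
        print(step, np.round(patterns @ x / N, 2))   # overlaps with each "ruin"
```

Because the fatigue time constant is much slower than the neural dynamics, each stored pattern acts as an attractor ruin: the state approaches it, lingers, and is then pushed onward.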

Transient oscillations in continuous-time excitatory ring neural networks with delay

Physical Review E, 1997

A ring neural network is a closed chain in which each unit is connected unidirectionally to the next one. Numerical investigations indicate that continuous-time excitatory ring networks composed of graded-response units can generate oscillations when interunit transmission is delayed. These oscillations appear for a wide range of initial conditions. The mechanisms underlying the generation of such patterns of activity are studied. The analysis of the asymptotic behavior of the system shows that (i) trajectories of most initial conditions tend to stable equilibria, (ii) undamped oscillations are unstable, and can only exist in a narrow region forming the boundary between the basins of attraction of the stable equilibria. Therefore the analysis of the asymptotic behavior of the system is not sufficient to explain the oscillations observed numerically when interunit transmission is delayed. This analysis corroborates the hypothesis that the oscillations are transient. In fact, it is shown that the transient behavior of the system with delay follows that of the corresponding discrete-time excitatory ring network. The latter displays infinitely many nonconstant periodic oscillations that transiently attract the trajectories of the network with delay, leading to long-lasting transient oscillations. The duration of these oscillations increases exponentially with the inverse of the characteristic charge-discharge time of the neurons, indicating that they can outlast observation windows in numerical investigations. Therefore, for practical applications, these transients cannot be distinguished from stationary oscillations. It is argued that understanding the transient behavior of neural network models is an important complement to the analysis of their asymptotic behavior, since both living nervous systems and artificial neural networks may operate in changing environments where long-lasting transients are functionally indistinguishable from asymptotic regimes.
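
A minimal simulation of the system described above, assuming the standard form eps * dx_i/dt = -x_i(t) + f(x_{i-1}(t - tau)) for a unidirectional excitatory ring with delayed transmission. The gain, delay, and time constant below are illustrative choices, not the paper's values; shrinking eps should make the transient oscillations last dramatically longer.

```python
import numpy as np

N = 4                     # units in the ring
tau = 5.0                 # interunit transmission delay
eps = 0.05                # charge-discharge time (smaller -> longer transients)
dt = 0.01
D = int(tau / dt)         # delay in integration steps

def f(u):
    return np.tanh(4.0 * u)   # graded, increasing response (excitatory coupling)

steps = 60000
x = np.zeros((steps + D, N))
x[:D] = np.random.default_rng(1).uniform(-1.0, 1.0, size=(D, N))  # initial history

# eps * dx_i/dt = -x_i(t) + f(x_{i-1}(t - tau)): each unit is driven
# only by the delayed output of its predecessor on the ring.
pred = np.arange(N) - 1          # index of each unit's predecessor (index -1 wraps)
for t in range(D, steps + D):
    x[t] = x[t - 1] + dt / eps * (-x[t - 1] + f(x[t - D, pred]))

# the oscillation can persist far longer than asymptotic analysis alone suggests
print(x[::5000, 0])
```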

From neuron to neural networks dynamics

The European Physical Journal Special Topics, 2007

This paper presents an overview of some techniques and concepts from dynamical systems theory that are used in the analysis of dynamical neural network models. In a first section, we describe the dynamics of the neuron, starting from the Hodgkin-Huxley description, which is in some sense the canonical description of the "biological neuron". We discuss some models reducing the Hodgkin-Huxley model to a two-dimensional dynamical system while keeping one of the main features of the neuron: its excitability. We then present examples of phase diagrams and bifurcation analysis for the Hodgkin-Huxley equations. Finally, we end this section with a dynamical-systems analysis of nerve-impulse propagation along the axon. In a second section, we consider neuron couplings, with a brief description of synapses, synaptic plasticity, and learning. We also briefly discuss the delicate issue of causal action from one neuron to another when complex feedback effects and nonlinear dynamics are involved. The third section presents the limit of weak coupling and the use of normal-form techniques to handle this situation. We then consider several examples of recurrent models with different types of synaptic interactions (symmetric, cooperative, random), introducing various techniques from statistical physics and dynamical systems theory. A last section is devoted to a detailed example of a recurrent model for which we analyze the dynamics in depth and discuss the effect of learning on the neuron dynamics. We also present recent methods for analyzing the nonlinear effects of the neural dynamics on signal propagation and causal action. An appendix presenting the main notions of dynamical systems theory useful for understanding the chapter has been added for the convenience of the reader.
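
As a concrete example of the two-dimensional reductions mentioned above, here is a sketch of the FitzHugh-Nagumo model, which preserves the excitability of the Hodgkin-Huxley neuron in two variables. The parameter values are conventional textbook choices, not taken from this paper.

```python
import numpy as np

def fitzhugh_nagumo(I_ext, T=200.0, dt=0.01, a=0.7, b=0.8, eps=0.08):
    """Two-variable reduction of Hodgkin-Huxley keeping excitability:
    v is a fast voltage-like variable, w a slow recovery variable."""
    n = int(T / dt)
    v, w = -1.0, -0.5
    trace = np.empty(n)
    for i in range(n):
        dv = v - v**3 / 3.0 - w + I_ext      # fast dynamics, cubic nullcline
        dw = eps * (v + a - b * w)           # slow linear recovery
        v += dt * dv
        w += dt * dw
        trace[i] = v
    return trace

# subthreshold input: decay to rest; suprathreshold: repetitive spiking
print(fitzhugh_nagumo(0.0)[-1], fitzhugh_nagumo(0.5)[-1])
```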

Nonlinear dynamics in a model neuron provide a novel mechanism for transient synaptic inputs to produce long-term alterations of postsynaptic activity

Journal of neurophysiology, 1993

1. A mathematical model of a bursting molluscan neuron has been found to possess multiple modes of electrical activity, such as periodic beating (tonic firing), periodic bursting (bursts of action potentials separated by quiescent periods), and potentially chaotic bursting, all at a single set of parameters. The multiple modes correspond to multiple stable attractors, whose existence is an emergent property of the nonlinear dynamics of the system. 2. Transient synaptic inputs can switch the activity of the neuron between different modes. These mode transitions, which do not require any changes in the biochemical or biophysical parameters of the neuron, provide an enduring response to a transient input, as well as a mechanism for phasic sensitivity (i.e., temporal specificity). 3. These results provide new insights into the role of nonlinear dynamics in information processing and storage at the level of the single neuron.
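
The switching mechanism can be illustrated without the full molluscan-neuron model, since any system with coexisting stable attractors behaves this way. The sketch below uses a subcritical-Hopf radial normal form (our choice, not the paper's equations), in which a stable rest state coexists with a stable limit cycle and a single transient pulse produces a lasting change of mode with no parameter change.

```python
# Radial normal form with coexisting attractors: dr/dt = r*(mu + r^2 - r^4).
# For -0.25 < mu < 0 the rest state r = 0 coexists with a stable limit
# cycle (here r ~ 0.94), separated by an unstable cycle (r ~ 0.34).
mu, dt = -0.1, 0.01

def run(r0, pulse_at=None, pulse=0.8, T=100.0):
    r = r0
    for i in range(int(T / dt)):
        if pulse_at is not None and abs(i * dt - pulse_at) < dt / 2:
            r += pulse                 # brief "synaptic" kick, then removed
        r += dt * r * (mu + r**2 - r**4)
    return r

print(run(0.01))                  # no input: activity decays to rest
print(run(0.01, pulse_at=50.0))   # one transient pulse: enduring oscillatory mode
```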

Investigation of Input-Output Gain in Dynamical Systems for Neural Information Processing

2000

The processing of sensory signals in the human cortex is currently the subject of numerous studies, both at an experimental and a theoretical level. These studies investigate the principles of interactions between pairs of cortical areas. Different theoretical models have been derived from the experimental results and then proposed, in order to describe the processing of stimuli in V1,

Emerging phenomena in neural networks with dynamic synapses and their computational implications

Frontiers in Computational Neuroscience, 2013

In this paper we review our research on the effect and computational role of dynamical synapses in feed-forward and recurrent neural networks. Among other results, we report the appearance of a new class of dynamical memories which result from the destabilisation of learned memory attractors. This has important consequences for dynamic information processing, allowing the system to sequentially access the information stored in the memories under changing stimuli. Although the storage capacity of stable memories also decreases, our study demonstrated the positive effect of synaptic facilitation in recovering maximum storage capacity and in enlarging the capacity of the system for memory recall under noisy conditions. Moreover, the dynamical phase described above can be associated with the voltage transitions between up and down states observed in cortical areas in the brain. We then studied the conditions under which the permanence times in the up state are power-law distributed, which is a sign of criticality, and concluded that the experimentally observed large variability of permanence times could be explained as the result of noisy dynamic synapses with large recovery times. Finally, we also report recent results concerning how short-term synaptic processes can transmit weak signals across more than one frequency range in noisy neural networks by a kind of stochastic multi-resonance. This is a consequence of the competition between changes in the transmitted signals as neurons vary their firing threshold and adaptive noise due to activity-dependent fluctuations in the synapses.
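
Dynamic synapses of the kind reviewed here are typically modelled with short-term depression and facilitation equations of the Tsodyks-Markram type. The sketch below implements that standard event-driven form with assumed parameter values; it illustrates the mechanism rather than reproducing the paper's exact model.

```python
import numpy as np

def tm_synapse(spike_times, U=0.2, tau_rec=500.0, tau_fac=200.0):
    """Tsodyks-Markram style dynamic synapse: x = available resources
    (depression), u = release probability (facilitation). Returns the
    effective efficacy u*x at each presynaptic spike (times in ms)."""
    x, u, last = 1.0, U, None
    out = []
    for t in spike_times:
        if last is not None:
            gap = t - last
            x = 1.0 - (1.0 - x) * np.exp(-gap / tau_rec)   # resources recover
            u = U + (u - U) * np.exp(-gap / tau_fac)       # facilitation decays
        u = u + U * (1.0 - u)      # facilitation step on each spike
        out.append(u * x)          # transmitted efficacy
        x = x * (1.0 - u)          # resources consumed by release
        last = t
    return out

# a regular 20 Hz train: efficacy first facilitates, then depresses
print(np.round(tm_synapse(np.arange(0.0, 500.0, 50.0)), 3))
```

Large recovery times tau_rec are exactly the regime the abstract invokes to explain the variability of up-state permanence times.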

Nonlinear Dynamics and Symbolic Dynamics of Neural Networks

Neural Computation, 1992

A piecewise linear equation is proposed as a method of analysis of mathematical models of neural networks. A symbolic representation of the dynamics in this equation is given as a directed graph on an N-dimensional hypercube. This provides a formal link with discrete neural networks such as the original Hopfield models. Analytic criteria are given to establish steady states and limit cycle oscillations independent of network dimension. Model networks that display multiple stable limit cycles and chaotic dynamics are discussed. The results show that such equations are a useful and efficient method of investigating the behavior of neural networks.
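
A sketch of this class of piecewise linear equations (Glass-type networks), assuming the common form dy_i/dt = -y_i + Lambda_i(s), where the drive depends only on the sign pattern s of the state. The sequence of orthants (hypercube vertices) visited is the symbolic dynamics: a walk on a directed graph over the N-cube. The truth table below is random, chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 3
# One focal point per sign pattern (hypercube vertex): an assumed random
# "truth table" with entries in {-1, +1}.
Lambda = rng.choice([-1.0, 1.0], size=(2**N, N))

def vertex(y):
    """Index of the orthant (vertex of the N-cube) containing y."""
    return sum(1 << i for i in range(N) if y[i] > 0)

y = rng.uniform(-1.0, 1.0, N)
dt, symbols = 0.01, []
for _ in range(20000):
    y += dt * (-y + Lambda[vertex(y)])   # piecewise linear flow toward focal point
    v = vertex(y)
    if not symbols or symbols[-1] != v:
        symbols.append(v)                # record each orthant transition

# depending on the truth table, the walk ends at a fixed vertex (steady
# state) or repeats a cycle of vertices (limit cycle oscillation)
print(symbols)
```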

Neural Systems as Nonlinear Filters

Neural Computation, 2000

Experimental data show that biological synapses behave quite differently from the symbolic synapses in all common artificial neural network models. Biological synapses are dynamic, i.e., their "weight" changes on a short time scale by several hundred percent depending on the past input to the synapse. In this article we address the question of how this inherent synaptic dynamics (which should not be confused with long-term "learning") affects the computational power of a neural network. In particular, we analyze computations on temporal and spatio-temporal patterns, and we give a complete mathematical characterization of all filters that can be approximated by feedforward neural networks with dynamic synapses. It turns out that even with just a single hidden layer such networks can approximate a very rich class of nonlinear filters: all filters that can be characterized by Volterra series. This result is robust with respect to various changes in the model of synaptic dynamics. Our characterization result provides, for all nonlinear filters that are approximable by Volterra series, a new complexity hierarchy related to the cost of implementing such filters in neural systems.
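
The kind of object characterized here can be sketched concretely: a feedforward network with one hidden layer whose input synapses are dynamic (below, simple depressing synapses) acts as a nonlinear temporal filter on a time series. The architecture and parameters are our assumptions for illustration, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(7)
H = 5                                   # hidden units
w_in = rng.normal(size=H)               # static part of the input weights
w_out = rng.normal(size=H)              # linear readout
tau_rec, U, dt = 50.0, 0.5, 1.0         # assumed depression parameters

def dynamic_filter(signal):
    """Single-hidden-layer feedforward net; each hidden unit sees the
    input through its own depressing synapse, so the output depends on
    the recent input history (a nonlinear temporal filter)."""
    x = np.ones(H)                      # synaptic resources per hidden unit
    out = []
    for s in signal:
        eff = w_in * x * s              # dynamic weight = static weight * resources
        x += dt * ((1.0 - x) / tau_rec - U * x * abs(s))   # use and recovery
        out.append(float(np.tanh(eff) @ w_out))
    return out

sig = np.sin(np.arange(300) * 0.1)
print(np.round(dynamic_filter(sig)[:5], 3))
```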