Feedback Control Tames Disorder in Attractor Neural Networks
Related papers
Chaotic associative recalls for fixed point attractor patterns
2003
Human perception involves complex nonlinear dynamics that are partly periodic and partly chaotic. We therefore propose a hybrid model: spatial chaotic dynamics for associative recall, used to retrieve patterns in the spirit of Walter Freeman's findings, and fixed-point dynamics for the memory stage, in the spirit of Hopfield's and Grossberg's models. In this model, each neuron in the network is a chaotic map whose phase space is divided into two regimes: a periodic regime with period V, used to represent a V-valued retrieved pattern, and a chaotic regime. First, patterns are stored in memory by a fixed-point learning algorithm. In the retrieval process, all neurons are initially set in the chaotic region. Owing to the ergodicity of chaos, each neuron comes close to the periodic points embedded in the chaotic attractor at certain instants. When this occurs, control is activated to drive the dynamics of each neuron to its corresponding stable periodic point. Computer simulations confirm the theoretical prediction.
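As a rough illustration of the retrieve-by-control idea described in this abstract, here is a minimal Python sketch assuming a logistic-map neuron and its simplest (period-1) target point; the particular map, the capture window eps and the feedback gain k are illustrative choices, not the paper's model. Ergodicity guarantees the free-running chaotic orbit eventually enters the capture window, after which a local feedback holds the neuron on the target point.

```python
import numpy as np

# A minimal sketch of the retrieve-by-control idea, assuming a logistic-map neuron
# and its simplest (period-1) target point; the map, the capture window eps and the
# feedback gain k are illustrative choices, not the paper's model.
r = 4.0                                   # fully chaotic logistic map f(x) = r*x*(1-x)
x_star = 1.0 - 1.0 / r                    # unstable fixed point embedded in the chaotic attractor
k = r * (1.0 - 2.0 * x_star)              # gain equal to f'(x_star), so the controlled multiplier is 0
eps = 5e-3                                # capture window in which the control is switched on

x = 0.1234                                # arbitrary start in the chaotic region
controlled = False
for n in range(20000):
    if abs(x - x_star) < eps:
        controlled = True                 # ergodicity guarantees the orbit eventually gets captured
    u = -k * (x - x_star) if controlled else 0.0
    x = r * x * (1.0 - x) + u             # free chaotic map plus (possibly zero) control term

print(f"target {x_star:.6f}, final state {x:.6f}, control activated: {controlled}")
```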
Continuous attractors for dynamic memories
eLife, 2021
Episodic memory has a dynamic nature: when we recall past episodes, we retrieve not only their content, but also their temporal structure. The phenomenon of replay, in the hippocampus of mammals, offers a remarkable example of this temporal dynamics. However, most quantitative models of memory treat memories as static configurations, neglecting the temporal unfolding of the retrieval process. Here, we introduce a continuous attractor network model with a memory-dependent asymmetric component in the synaptic connectivity, which spontaneously breaks the equilibrium of the memory configurations and produces dynamic retrieval. The detailed analysis of the model with analytical calculations and numerical simulations shows that it can robustly retrieve multiple dynamical memories, and that this feature is largely independent of the details of its implementation. By calculating the storage capacity, we show that the dynamic component does not impair memory capacity, and can even enhance it in certain regimes.
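The moving-bump picture behind "dynamic retrieval" can be sketched with a standard ring attractor; the threshold-linear rate model, kernel gains and time step below are illustrative assumptions, not the eLife model itself. The symmetric cosine kernel sustains a bump of activity, and a small antisymmetric sine kernel breaks the equilibrium so the bump travels, i.e. retrieval unfolds in time.

```python
import numpy as np

# Minimal illustrative sketch (not the paper's exact model): a threshold-linear ring
# network.  The symmetric cosine kernel sustains a localised activity bump (a
# continuous attractor); a small antisymmetric sine kernel makes the bump travel.
# All gains (J0, J1, A, h0) are illustrative choices.
N = 200
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
d = theta[:, None] - theta[None, :]

J0, J1, A, h0 = -2.0, 6.0, 1.0, 1.0
W = (J0 + J1 * np.cos(d) + A * np.sin(d)) / N   # symmetric + asymmetric coupling

r = np.maximum(np.cos(theta - np.pi), 0.0)      # initial bump centred at pi
dt, tau = 0.1, 1.0
for step in range(401):
    if step % 100 == 0:
        print(f"t={step * dt:5.1f}  bump centre = {theta[np.argmax(r)]:.2f} rad")
    inp = W @ r + h0
    r = r + dt / tau * (-r + np.maximum(inp, 0.0))   # rate dynamics with rectification
```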
Attractor Memory with Self-organizing Input
Lecture Notes in Computer Science, 2006
We propose a neural-network-based autoassociative memory system for unsupervised learning. This system is intended as an example of how a general information processing architecture, similar to that of neocortex, could be organized. The units of the network are arranged into two separate groups, called populations: one input and one hidden population. The units in the input population form receptive fields that sparsely project onto the units of the hidden population. Competitive learning is used to train these forward projections. The hidden population implements an attractor memory. A back projection from the hidden to the input population is trained with a Hebbian learning rule. The system is capable of processing correlated and densely coded patterns, which standard attractor neural networks handle poorly. It shows good performance on a number of typical attractor neural network tasks such as pattern completion, noise reduction, and prototype extraction.
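A toy version of the two-population data flow might look as follows; the population sizes, the prototype initialisation, the k-winners-take-all hidden code, the covariance-rule attractor and all thresholds are illustrative choices, and the run is only meant to show the forward / attractor / backward pipeline, not the published system's performance.

```python
import numpy as np

# Hedged sketch of the two-population architecture, not the authors' implementation:
# an input population projects onto a hidden population through receptive fields
# trained with competitive learning, the hidden population stores sparse codes in an
# attractor memory (covariance-rule Hebbian weights with a firing threshold), and a
# Hebbian back projection maps the retrieved hidden code back onto the input layer.
rng = np.random.default_rng(1)
n_in, n_hid, n_pat, k = 64, 32, 5, 3
patterns = (rng.random((n_pat, n_in)) < 0.3).astype(float)

# Competitive learning of the forward projection (prototype init avoids dead units).
F = patterns[rng.integers(0, n_pat, n_hid)] + 0.05 * rng.random((n_hid, n_in))
for _ in range(30):
    for x in patterns:
        w = np.argmax(F @ x)
        F[w] += 0.1 * (x - F[w])            # winner moves its receptive field towards x

def code(x):                                # sparse hidden code: k most strongly driven units
    h = np.zeros(n_hid)
    h[np.argsort(F @ x)[-k:]] = 1.0
    return h

H = np.array([code(x) for x in patterns])
a = k / n_hid                               # mean hidden activity
W_att = (H - a).T @ (H - a)                 # covariance-rule attractor weights
np.fill_diagonal(W_att, 0.0)
B = patterns.T @ H                          # Hebbian back projection

# Retrieval: corrupt an input, project forward, settle the attractor, project back.
x_noisy = patterns[0].copy()
flip = rng.choice(n_in, 8, replace=False)
x_noisy[flip] = 1.0 - x_noisy[flip]

h = code(x_noisy)
for _ in range(5):
    h = (W_att @ h > 1.2).astype(float)     # threshold dynamics for the sparse binary codes
x_rec = (B @ h > k / 2).astype(float)

print("corrupted input overlap:", np.mean(x_noisy == patterns[0]))
print("reconstruction overlap :", np.mean(x_rec == patterns[0]))
```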
Hopf Bifurcation in a Chaotic Associative Memory
Neurocomputing, 2015
This paper has two basic objectives. The first is to investigate Hopf bifurcation in the internal state of a Chaotic Associative Memory (CAM). For a small network with three neurons, giving a six-dimensional state equation, the existence and stability of the Hopf bifurcation were verified analytically. The second objective is to study how the Hopf bifurcation changes the external state (output) of the CAM, since the network was trained to associate a dataset of input-output patterns. This study differs from others in three main ways: the bifurcation parameter is not a time delay but a physical parameter of the CAM; the weights of the interconnections between chaotic neurons are neither free parameters nor chosen arbitrarily, but are determined by the training process of a classical AM; and the Hopf bifurcation occurs in the internal state of the CAM rather than in the external state (the input-output network signal). We present three examples of Hopf bifurcation: one neuron undergoing a supercritical bifurcation while the other two do not bifurcate; two neurons undergoing a subcritical bifurcation while one does not; and the same configuration as before, but with a supercritical bifurcation. We show that the presence of a limit cycle in the internal state of the CAM prevents the output signals of the network from converging to a desired equilibrium state (a desired memory), even though the CAM is able to access this memory.
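For readers less familiar with the terminology, the generic picture of a supercritical Hopf bifurcation can be checked numerically with the normal form: below the critical parameter value trajectories spiral into the fixed point, above it they settle onto a limit cycle whose radius grows as the square root of the parameter. The sketch below uses only the radial normal-form equation with illustrative parameter values; it is not the CAM's six-dimensional state equation.

```python
import numpy as np

# Normal-form sketch of a supercritical Hopf bifurcation (illustrative, not the
# CAM's state equations): the radial dynamics dr/dt = mu*r - r**3 has a stable
# fixed point r = 0 for mu < 0 and a stable limit cycle of radius sqrt(mu) for
# mu > 0; the angular part just rotates at a constant rate and is omitted here.
def asymptotic_radius(mu, r0=0.5, dt=0.01, steps=20000):
    r = r0
    for _ in range(steps):
        r += dt * (mu * r - r ** 3)   # Euler integration of the radial normal form
    return r

for mu in (-0.2, 0.1, 0.4):
    predicted = np.sqrt(mu) if mu > 0 else 0.0
    print(f"mu={mu:+.1f}  simulated radius {asymptotic_radius(mu):.3f}  predicted {predicted:.3f}")
```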
Neural network modeling of associative memory: Beyond the Hopfield model
Physica A: Statistical Mechanics and its Applications, 1992
A number of neural network models, in which fixed-point and limit-cycle attractors of the underlying dynamics are used to store and associatively recall information, are described. In the first class of models, a hierarchical structure is used to store an exponentially large number of strongly correlated memories. The second class of models uses limit cycles to store and retrieve individual memories. A neurobiologically plausible network that generates low-amplitude periodic variations of activity, similar to the oscillations observed in electroencephalographic recordings, is also described. Results obtained from analytic and numerical studies of the properties of these networks are discussed.
PLoS ONE, 2012
We study the properties of the dynamical phase transition occurring in neural network models in which associative memory and sequential pattern recognition compete. This competition occurs through a weighted mixture of the symmetric and asymmetric parts of the synaptic matrix. Through a generating functional formalism, we determine the structure of the parameter space at non-zero temperature and near saturation (i.e., when the number of stored patterns scales with the size of the network), identifying the regions of high and weak pattern correlations, the spin-glass solutions, and the order-disorder transition between these regions. This analysis reveals that, when associative memory is dominant, smooth transitions appear between highly correlated regions and spurious states. In contrast, when sequential pattern recognition is stronger than associative memory, the transitions are always discontinuous. Additionally, when the symmetric and asymmetric parts of the synaptic matrix are defined in terms of the same set of patterns, there is a discontinuous transition between associative memory and sequential pattern recognition. When they are defined in terms of independent sets of patterns, the network is able to perform both associative memory and sequential pattern recognition over a wide range of parameter values.
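The weighted mixture of symmetric and asymmetric couplings can be illustrated at zero temperature with a small Hopfield-style simulation; the network size, pattern count and mixing values below are illustrative, and the sketch ignores the finite-temperature, near-saturation regime the paper actually analyses.

```python
import numpy as np

# Illustrative sketch (not the paper's generating-functional analysis): couplings are
# a weighted mixture of a symmetric Hebbian part (associative memory) and an
# asymmetric sequential part (pattern-to-pattern transitions), with mixing weight omega.
rng = np.random.default_rng(0)
N, P = 500, 5
xi = rng.choice([-1.0, 1.0], size=(P, N))

W_sym = xi.T @ xi / N                         # Hebbian (symmetric) couplings
W_asym = np.roll(xi, -1, axis=0).T @ xi / N   # sequence couplings xi^mu -> xi^(mu+1)

def run(omega, steps=6):
    J = (1 - omega) * W_sym + omega * W_asym
    s = xi[0].copy()
    for t in range(steps):
        s = np.where(J @ s >= 0, 1.0, -1.0)   # zero-temperature synchronous update
        overlaps = xi @ s / N                 # overlap with each stored pattern
        print(f"omega={omega:.1f} t={t + 1}  best pattern={np.argmax(overlaps)} "
              f"(overlap {overlaps.max():+.2f})")

run(0.3)   # symmetric part dominates: the state stays on pattern 0
run(0.7)   # asymmetric part dominates: the state steps through the stored sequence
```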
Response functions improving performance in analog attractor neural networks
Physical review. E, Statistical physics, plasmas, fluids, and related interdisciplinary topics, 1994
In the context of attractor neural networks, we study how the equilibrium analog neural activities reached by the network dynamics during memory retrieval may improve storage performance by reducing the interference between the recalled pattern and the other stored ones. We determine a simple dynamics that stabilizes network states highly correlated with the retrieved pattern, for a number of stored memories that does not exceed α⋆N, where α⋆ ∈ [0, 0.41] depends on the global activity level in the network and N is the number of neurons.
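A hedged illustration of why graded activities help: a binary retrieval state has small but nonzero overlaps with every non-retrieved pattern, whereas an analog state can cancel them exactly. The projection construction below only exhibits that target condition; it is not the response function or the retrieval dynamics derived in the paper.

```python
import numpy as np

# Hedged illustration, not the paper's dynamics: with +/-1 activities the retrieved
# state has O(1/sqrt(N)) overlaps (interference) with the other stored patterns, but a
# graded state can cancel them.  Here the graded state is built by projecting the
# retrieved pattern orthogonally to the other patterns; the paper instead derives
# neuron response functions whose fixed points reduce this interference dynamically.
rng = np.random.default_rng(0)
N, P = 1000, 50
xi = rng.choice([-1.0, 1.0], size=(P, N))

others = xi[1:]                                      # non-retrieved patterns
m_binary = others @ xi[0] / N                        # interference of the binary state

G = others @ others.T / N                            # Gram matrix of the other patterns
v = xi[0] - others.T @ np.linalg.solve(G, m_binary)  # graded state with zero overlap on others

print("retrieved-pattern overlap (graded):", xi[0] @ v / N)
print("rms interference, binary:", np.sqrt(np.mean(m_binary ** 2)))
print("rms interference, graded:", np.sqrt(np.mean((others @ v / N) ** 2)))
```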
Journal of the Physical Society of Japan, 2010
We investigated how the stability of macroscopic states in the associative memory model is affected by synaptic depression. We applied to this model the dynamical mean-field theory recently developed for stochastic neural network models with synaptic depression. By introducing a sublattice method, we derived macroscopic equations for the firing-state and depression variables. Using these macroscopic equations, we obtained the phase diagram as the strength of synaptic depression and the correlation level among stored patterns were varied. We found that there is an unstable region in which neither the memory state nor the mixed state can be stable, and that various switching phenomena can occur in this region.
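A minimal sketch of an associative network with depressing synapses is given below: stochastic parallel updates of a Hopfield network in which each presynaptic neuron carries a Tsodyks-Markram-style resource variable that is consumed when the neuron fires and recovers over time. The temperature, depression strength and recovery time are illustrative; whether the retrieved state persists, decays or switches under such parameters is exactly the dependence the paper's phase diagram describes.

```python
import numpy as np

# Minimal sketch, not the paper's mean-field analysis: stochastic dynamics of a
# Hopfield network in which each presynaptic neuron j carries a depression variable
# x_j that is consumed when the neuron fires and recovers with time constant tau_rec.
rng = np.random.default_rng(0)
N, P = 300, 3
beta, U, tau_rec = 10.0, 0.2, 5.0            # inverse temperature, depression strength, recovery time

xi = rng.choice([-1.0, 1.0], size=(P, N))
J = xi.T @ xi / N
np.fill_diagonal(J, 0.0)

s = xi[0].copy()                             # start in the first memory
x = np.ones(N)                               # available synaptic resource per presynaptic neuron
for t in range(50):
    h = J @ (x * s)                          # synaptic input with depressed efficacies
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
    s = np.where(rng.random(N) < p_up, 1.0, -1.0)   # stochastic parallel update
    firing = (s + 1.0) / 2.0                 # 0/1 firing used to consume synaptic resources
    x = x + (1.0 - x) / tau_rec - U * x * firing
    if t % 10 == 0:
        print(f"t={t:2d}  overlaps with stored patterns: {np.round(xi @ s / N, 2)}")
```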
Attractor neural networks with activity-dependent synapses: The role of synaptic facilitation
Neurocomputing, 2007
We studied an autoassociative neural network with dynamic synapses that include a facilitating mechanism. We developed a general mean-field framework to study the relevance of the different parameters defining the synaptic dynamics and their influence on the collective properties of the network. Depending on these parameters, the network shows different types of behaviour, including a retrieval phase, an oscillatory regime, and a non-retrieval phase. In the oscillatory phase, the network activity continuously jumps between the stored patterns. Compared with other activity-dependent mechanisms such as synaptic depression, synaptic facilitation enhances the network's ability to switch among the stored patterns and, therefore, its adaptation to external stimuli. A detailed analysis of our system shows that stronger facilitation gives more efficient access to the stored information, with faster retrieval and fewer errors. We also present a set of Monte Carlo simulations confirming our analytical results.
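At the single-synapse level, the facilitating dynamic synapse referred to here is usually modelled with a release-probability variable u (facilitation) alongside a resource variable x (depression), with the transmitted efficacy proportional to u·x. The sketch below uses illustrative time constants and a hypothetical presynaptic spike train, not the paper's network equations.

```python
import numpy as np

# Single-synapse sketch of a facilitating dynamic synapse (Tsodyks-Markram style),
# not the paper's network model: u is the release probability (facilitation, grows
# with presynaptic firing), x the available resource (depression, consumed by firing);
# the efficacy transmitted by a spike is u*x.  Time constants and spikes are illustrative.
dt, T = 1.0, 200                                   # time step and duration in ms
tau_rec, tau_fac, U0 = 200.0, 600.0, 0.2
spikes = np.zeros(T)
spikes[20:120:10] = 1.0                            # a short presynaptic burst at 100 Hz-scale spacing

u, x = U0, 1.0
for t in range(T):
    if spikes[t]:
        u = u + U0 * (1.0 - u)                     # facilitation: each spike increases release probability
        eff = u * x                                # efficacy actually transmitted by this spike
        x = x - eff                                # depression: resources consumed in proportion
        print(f"t={t:3d} ms  u={u:.2f}  x={x:.2f}  efficacy={eff:.2f}")
    u += dt * (U0 - u) / tau_fac                   # u relaxes back to its baseline U0
    x += dt * (1.0 - x) / tau_rec                  # x recovers towards 1
```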
Conditions for the emergence of spatially asymmetric retrieval states in an attractor neural network
Central European Journal of Physics, 2005
In this paper we show that, during retrieval in a binary symmetric Hebb neural network, spatially localized states can be observed when the connectivity of the network is distance-dependent and a constraint on the network activity is imposed that forces different activity levels in the retrieval and learning states. This asymmetry between the activity during retrieval and learning is found to be a sufficient condition for observing spatially localized retrieval states. The result is confirmed analytically and by simulation.
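The two ingredients named here, distance-dependent connectivity and a retrieval activity forced below the learning activity, can be put together in a small sketch; the ring geometry, exponential kernel and k-winners-take-all constraint below are illustrative assumptions, not the paper's exact model. One typically sees the surviving active units cluster in space while still belonging to the cued pattern.

```python
import numpy as np

# Illustrative sketch, not the paper's exact model: a ring of neurons with
# distance-dependent Hebbian connectivity storing sparse 0/1 patterns, retrieved
# under a k-winners-take-all constraint that forces a lower activity during
# retrieval than during learning.
rng = np.random.default_rng(0)
N, P = 400, 3
a_learn, a_retr, sigma = 0.2, 0.05, 30.0       # learning activity, retrieval activity, connectivity range

pos = np.arange(N)
d = np.abs(pos[:, None] - pos[None, :])
d = np.minimum(d, N - d)                        # distances on a ring
C = np.exp(-d / sigma)                          # distance-dependent connectivity

eta = (rng.random((P, N)) < a_learn).astype(float)
W = C * ((eta - a_learn).T @ (eta - a_learn))   # Hebbian weights masked by distance
np.fill_diagonal(W, 0.0)

k = int(a_retr * N)
s = eta[0].copy()                               # cue with the full first pattern
for _ in range(20):
    h = W @ s
    s = np.zeros(N)
    s[np.argsort(h)[-k:]] = 1.0                 # activity constraint: only the k most excited units fire

active = np.sort(pos[s > 0])
print("fraction of active units belonging to the cued pattern:", (eta[0] * s).sum() / k)
print("positions of active units:", active)
```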