John Hertz - Academia.edu
Papers by John Hertz
Neural Computation, Feb 1, 2010
Neuronal firing correlations are studied using simulations of a simple network model for a cortical column in a high-conductance state with dynamically balanced excitation and inhibition. Although correlations between individual pairs of neurons exhibit considerable heterogeneity, population averages show systematic behavior. When the network is in a stationary state, the average correlations are generically small: correlation coefficients are of order 1/N, where N is the number of neurons in the network. However, when the input to the network varies strongly in time, much larger values are found. In this situation, the network is out of balance and the synaptic conductance is low at the times when the strongest firing occurs. However, examination of the correlation functions of synaptic currents reveals that after these bursts, balance is restored within a few milliseconds by a rapid increase in inhibitory synaptic conductance. These findings suggest an extension of the notion of the balanced state to include balanced fluctuations of synaptic currents, with a characteristic timescale of a few milliseconds.
The international journal of supercomputer applications, Dec 1, 1988
Neural computation is a style of computation that draws inspiration from the way the brain computes. It is an intrinsically collective paradigm characterized by high connectivity among a very large number of simple processors running in parallel, possibly asynchronously. Methods developed in the theory of many-particle systems can be brought to bear on important conceptual questions about the operation and programming of such computational assemblies. This paper reviews several basic problems that arise in this area: the mathematical formulation of the collective computation done by such a network and of algorithms for programming ("teaching") them. The importance of phase transitions for understanding the generic behavior of such systems and algorithms is emphasized.
Neurocomputing, Jun 1, 1999
BMC Neuroscience, Jul 1, 2010
Physical Review E, Sep 14, 2004
We present a dynamical description and analysis of non-equilibrium transitions in the noisy one-dimensional Ginzburg-Landau equation for an extensive system, based on a weak-noise canonical phase space formulation of the Freidlin-Wentzell or Martin-Siggia-Rose methods. We derive propagating nonlinear domain wall or soliton solutions of the resulting canonical field equations with superimposed diffusive modes. The transition pathways are characterized by the nucleation and subsequent propagation of domain walls. We discuss the general switching scenario in terms of a dilute gas of propagating domain walls and evaluate the Arrhenius factor in terms of the associated action. We find excellent agreement with recent numerical optimization studies.
Journal of Statistical Mechanics: Theory and Experiment, Mar 12, 2013
Neurons subject to a common non-stationary input may exhibit correlated firing behavior. Correlations in the statistics of neural spike trains also arise as the effect of interactions between neurons. Here we show that these two situations can be distinguished, with machine learning techniques, provided the data are rich enough. In order to do this, we study the problem of inferring a kinetic Ising model, stationary or non-stationary, from the available data. We apply the inference procedure to two data sets: one from salamander retinal ganglion cells and the other from a realistic computational cortical network model. We show that many aspects of the concerted activity of the salamander retinal neurons can be traced simply to the external input. A model of non-interacting neurons subject to a non-stationary external field outperforms a model with stationary input with couplings between neurons, even accounting for the differences in the number of model parameters. When couplings are added to the non-stationary model, for the retinal data, little is gained: the inferred couplings are generally not significant. Likewise, the distribution of the sizes of sets of neurons that spike simultaneously and the frequency of spike patterns as a function of their rank (Zipf plots) are well explained by an independent-neuron model with time-dependent external input, and adding connections to such a model does not offer significant improvement. For the cortical model data, robust couplings, well correlated with the real connections, can be inferred using the non-stationary model. Adding connections to this model slightly improves the agreement with the data for the probability of synchronous spikes but hardly affects the Zipf plot.
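The inference step described in this abstract can be illustrated by a minimal sketch of fitting a stationary kinetic Ising model by maximum likelihood on synthetic spike data. The network size, coupling scale, data length and gradient-ascent schedule below are all illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Stationary kinetic Ising model: spins are updated synchronously with
# P(S_i(t+1) | S(t)) ∝ exp(S_i(t+1) * H_i(t)),  H_i(t) = h_i + Σ_j J_ij S_j(t).
rng = np.random.default_rng(1)
N, T = 10, 20000                      # illustrative sizes

J_true = rng.normal(0, 1 / np.sqrt(N), (N, N))
h_true = np.zeros(N)

# simulate the synchronously updated dynamics
S = np.empty((T, N))
S[0] = rng.choice([-1.0, 1.0], N)
for t in range(T - 1):
    H = h_true + J_true @ S[t]
    p_up = 1.0 / (1.0 + np.exp(-2.0 * H))     # P(S_i(t+1) = +1)
    S[t + 1] = np.where(rng.random(N) < p_up, 1.0, -1.0)

# maximum-likelihood inference by gradient ascent on the log-likelihood;
# dL/dJ_ij = <S_i(t+1) S_j(t)> - <tanh(H_i(t)) S_j(t)>
J = np.zeros((N, N))
h = np.zeros(N)
for _ in range(200):
    H = h + S[:-1] @ J.T
    grad_J = (S[1:].T @ S[:-1] - np.tanh(H).T @ S[:-1]) / (T - 1)
    grad_h = (S[1:] - np.tanh(H)).mean(axis=0)
    J += 0.5 * grad_J
    h += 0.5 * grad_h

# correlation between inferred and true couplings
print(np.corrcoef(J.ravel(), J_true.ravel())[0, 1])
```

With rich enough data (large T), the inferred couplings track the true ones; with short or strongly non-stationary recordings, the same fit can mistake common input for interactions, which is the distinction the paper addresses.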
Physical Review E, Apr 7, 2015
American Journal of Physics, Jul 1, 1994
We can learn something about coding in large populations of neurons from models of the spike pattern distributions constructed from data. In our work, we do this for data generated from computational models of local cortical networks. This permits us to explore how features of the neuronal and synaptic properties of the network are related to those of the spike pattern distribution model. We employ the approach of Schneidman et al. [1] and model this distribution by a Sherrington-Kirkpatrick (SK) model: P[S] = Z^{-1} exp(½ Σ_{ij} J_{ij} S_i S_j + Σ_i h_i S_i). In the work reported here, we analyze spike records from a simple model of a cortical column in a high-conductance state for two different cases: one with stationary tonic firing and the other with a rapidly time-varying input that produces rapid variations in firing rates. The average cross-correlation coefficient in the former is an order of magnitude smaller than that in the latter. To estimate the parameters J_{ij} and h_i we use a technique [2] based on inversion of the Thouless-Anderson-Palmer equations from spin glass theory. We have performed these fits for groups of neurons of sizes from 12 to 200 for tonic firing and from 6 to 800 for the case of the rapidly time-varying "stimulus". The first two figures show that the distributions of J_{ij}'s in the two cases are quite similar, both growing slightly narrower with increasing N. They are also qualitatively similar to those found by Schneidman et al. and by Tkacik et al. [3] for data from retinal networks. As in their work, it does not appear to be necessary to include higher-order couplings.
The means, which are much smaller than the standard deviations, also decrease with N, and the one for tonic firing is less than half that for the stimulus-driven network. However, the models obtained never appear to be in a spin glass phase for any of the sizes studied, in contrast to the finding of Tkacik et al., who reported spin glass behaviour at N = 120. This is shown in the third figure panel. The x axis is 1/J, where J = N^{1/2} std(J_{ij}), and the y axis is H/J, where H is the total "field" N^{-1} Σ_i (h_i + Σ_j J_{ij} ⟨S_j⟩). The green curve marks the de Almeida-Thouless line separating the normal and spin glass phases in this parameter plane. All our data, for N ≤ 800 (the number of excitatory neurons in the originally-simulated network), lie in the normal region, and extrapolation from our results predicts spin glass behaviour only for N > 5000.
[1] E. Schneidman et al., Nature 440, 1007-1012 (2006).
[2] T. Tanaka, Phys. Rev. E 58, 2302-2310 (1998); H. J. Kappen and F. B. Rodriguez, Neural Comp. 10, 1137-1156 (1998).
[3] G. Tkacik et al., arXiv:q-bio.NC/0611072 v1 (2006).
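The inversion technique of ref. [2] has a simpler relative that conveys the idea: the naive mean-field estimate J_ij ≈ -(C^{-1})_ij for i ≠ j, where C is the covariance matrix of the spins (TAP inversion adds an Onsager correction on top of this). The sketch below applies it to synthetic Glauber samples from a weakly coupled SK model; the network size, coupling scale and sample count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 8, 200000                       # illustrative sizes

# weak symmetric SK couplings, zero fields
J_true = rng.normal(0, 0.3 / np.sqrt(N), (N, N))
J_true = (J_true + J_true.T) / 2.0
np.fill_diagonal(J_true, 0.0)

# Glauber (heat-bath) sampling of P[S] ∝ exp(½ Σ_{ij} J_ij S_i S_j)
S = rng.choice([-1.0, 1.0], N)
samples = np.empty((T, N))
for t in range(T):
    i = rng.integers(N)
    H = J_true[i] @ S                  # local field on spin i
    S[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * H)) else -1.0
    samples[t] = S

# naive mean-field inversion: J ≈ -C^{-1} off the diagonal
C = np.cov(samples.T)
J_nmf = -np.linalg.inv(C)
np.fill_diagonal(J_nmf, 0.0)

iu = np.triu_indices(N, 1)
print(np.corrcoef(J_nmf[iu], J_true[iu])[0, 1])
```

For the weak couplings used here the naive estimate already correlates well with the true couplings; the TAP correction matters as couplings grow and the system approaches the spin glass regime discussed above.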
arXiv (Cornell University), Feb 18, 2000
We present a model of an olfactory system that performs odour segmentation. Based on the anatomy and physiology of natural olfactory systems, it consists of a pair of coupled modules, bulb and cortex. The bulb encodes the odour inputs as oscillating patterns. The cortex functions as an associative memory: when the input from the bulb matches a pattern stored in the connections between its units, the cortical units resonate in an oscillatory pattern characteristic of that odour. Further circuitry transforms this oscillatory signal into a slowly varying feedback to the bulb. This feedback implements olfactory segmentation by suppressing the bulbar response to the pre-existing odour, thereby allowing subsequent odours to be singled out for recognition.
Physical review, Oct 25, 2001
We study the Langevin dynamics of the standard random heteropolymer model by mapping the problem to a supersymmetric field theory using the Martin-Siggia-Rose formalism. The resulting model is solved non-perturbatively employing a Gaussian variational approach. In constructing the solution, we assume that the chain is very long and impose the translational invariance which is expected to be present in the bulk of the globule by averaging over the center of mass coordinate. In this way we derive equations of motion for the correlation and response functions C(t, t′) and R(t, t′). The order parameters are extracted from the asymptotic behavior of these functions. We find a dynamical phase diagram with frozen (glassy) and melted (ergodic) phases. In the glassy phase the system fails to reach equilibrium and exhibits aging of the type found in p-spin glasses. Within the approximations used in this study, the random heteropolymer model can be mapped to the problem of a manifold in a random potential with power-law correlations.
arXiv (Cornell University), Mar 25, 2004
We present a complete mean field theory for a balanced state of a simple model of an orientation hypercolumn. The theory is complemented by a description of a numerical procedure for solving the mean-field equations quantitatively. With our treatment, we can determine self-consistently both the firing rates and the firing correlations, without being restricted to specific neuron models. Here, we solve the analytically derived mean-field equations numerically for integrate-and-fire neurons. Several known key properties of orientation-selective cortical neurons emerge naturally from the description: irregular firing with statistics close to (but not restricted to) Poisson statistics; an almost linear gain function (firing frequency as a function of stimulus contrast) of the neurons within the network; and a contrast-invariant tuning width of the neuronal firing. We find that the irregularity in firing depends sensitively on synaptic strengths. If Fano factors are bigger than 1, then they are so for all stimulus orientations that elicit firing. We also find that the tuning of the noise in the input current is the same as the tuning of the external input, while that for the mean input current depends on both the external input and the intracortical connectivity.
Neural Information Processing Systems, Nov 30, 1992
We have trained networks of Σ-Π units with short-range connections to simulate simple cellular automata that exhibit complex or chaotic behaviour. Three levels of learning are possible (in decreasing order of difficulty): learning the underlying automaton rule, learning asymptotic dynamical behaviour, and learning to extrapolate the training history. The levels of learning achieved with and without weight sharing for different automata provide new insight into their dynamics.
- With a single neuron, it is not too hard to see how to adjust the weights based upon the error values. We've already seen a couple of ways.
- With a multi-layer network, it is less obvious. For one thing, what is the "error" for the neurons in non-final layers? Without these, we don't know how to adjust.
- This is called the "credit assignment" problem (maybe it should be "blame assignment").
- Werbos, in his 1974 Harvard PhD thesis, found a method.
- Rumelhart and McClelland, in 1985, discovered the method, presumably independently, and popularized it under its current name.
- In mathematics, such methods fall in the category of "optimization". The technique is gradient descent, as explained for Adalines.
- However, the computation of the gradient is less clear.
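The steps above can be sketched in a minimal numpy example of backpropagation: gradient descent in which the output error is assigned backwards through the chain rule, giving each hidden unit its own "error". The task (XOR), architecture and learning rate are illustrative choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR data; a constant 1 is appended to the input and hidden layers as a bias.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
ones = np.ones((4, 1))

W1 = rng.normal(0, 1, (3, 4))   # (2 inputs + bias) -> 4 hidden units
W2 = rng.normal(0, 1, (5, 1))   # (4 hidden + bias) -> 1 output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eta = 1.0
for _ in range(10000):
    Xb = np.hstack([X, ones])
    h = sigmoid(Xb @ W1)                 # forward pass
    hb = np.hstack([h, ones])
    out = sigmoid(hb @ W2)
    # output-layer delta: derivative of the squared error through the sigmoid
    d_out = (out - y) * out * (1 - out)
    # credit assignment: the output delta is sent back through W2
    # (bias row excluded) to give each hidden unit its own delta
    d_h = (d_out @ W2[:4].T) * h * (1 - h)
    W2 -= eta * hb.T @ d_out             # gradient-descent updates
    W1 -= eta * Xb.T @ d_h

pred = sigmoid(np.hstack([sigmoid(np.hstack([X, ones]) @ W1), ones]) @ W2)
print(pred.ravel())
```

The backward pass is the answer to the credit-assignment question in the bullets: each hidden unit's delta is the weighted sum of the deltas it feeds into, scaled by its own activation slope.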
Current Opinion in Neurobiology, 2015
Our ability to collect large amounts of data from many cells has been paralleled by the development of powerful statistical models for extracting information from these data. Here we discuss how the activity of cell assemblies can be analyzed using these models, focusing on generalized linear models and maximum entropy models and describing a number of recent studies that employ these tools for analyzing multineuronal activity. We show results from simulations comparing inferred functional connectivity, pairwise correlations and the real synaptic connections in simulated networks, demonstrating the power of statistical models in inferring functional connectivity. Further development of network reconstruction techniques based on statistical models should lead to more powerful methods for understanding the functional anatomy of cell assemblies.
Physical Review E, 1999
A mean-field multi-spin interaction spin glass model is analyzed in the presence of a ferromagnetic coupling. The static and dynamical phase diagrams contain four phases (paramagnet, spin glass, ordinary ferromagnet and glassy ferromagnet) and exhibit reentrant behavior. The glassy ferromagnet phase has anomalous dynamical properties. The results are consistent with a nonequilibrium thermodynamics that has been proposed for glasses.
Physical Review Letters, 2011
There has been recent progress on inferring the structure of interactions in complex networks when they are in stationary states satisfying detailed balance, but little has been done for nonequilibrium systems. Here we introduce an approach to this problem, considering, as an example, the question of recovering the interactions in an asymmetrically coupled, synchronously updated Sherrington-Kirkpatrick model. We derive an exact iterative inversion algorithm and develop efficient approximations based on dynamical mean-field and Thouless-Anderson-Palmer equations that express the interactions in terms of equal-time and one-time-step-delayed correlation functions.
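The flavour of such an inversion can be conveyed by its naive mean-field version for weak couplings: with C the equal-time covariance, D the one-step-delayed covariance D_ij = ⟨δS_i(t+1) δS_j(t)⟩, and A = diag(1 - m_i²), the couplings are estimated as J ≈ A^{-1} D C^{-1}. The sketch below tests this on synthetic data; sizes and coupling scale are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 12, 50000                       # illustrative sizes

# weak asymmetric couplings (J_ij and J_ji independent), zero fields
J_true = rng.normal(0, 0.3 / np.sqrt(N), (N, N))

# synchronously updated kinetic dynamics: P(S_i = +1) = 1 / (1 + e^{-2 H_i})
S = np.empty((T, N))
S[0] = rng.choice([-1.0, 1.0], N)
for t in range(T - 1):
    H = J_true @ S[t]
    S[t + 1] = np.where(rng.random(N) < 1.0 / (1.0 + np.exp(-2.0 * H)), 1.0, -1.0)

m = S.mean(axis=0)
dS = S - m
C = dS.T @ dS / T                      # equal-time covariance
D = dS[1:].T @ dS[:-1] / (T - 1)       # one-step-delayed covariance
J_nmf = np.diag(1.0 / (1.0 - m ** 2)) @ D @ np.linalg.inv(C)

print(np.corrcoef(J_nmf.ravel(), J_true.ravel())[0, 1])
```

Note that, unlike the equilibrium case, the delayed correlations break the symmetry between i and j, so asymmetric couplings can be recovered.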
Physical Review E, 2004
We present a dynamical description and analysis of nonequilibrium transitions in the noisy one-dimensional Ginzburg-Landau equation for an extensive system, based on a weak-noise canonical phase space formulation of the Freidlin-Wentzell or Martin-Siggia-Rose methods. We derive propagating nonlinear domain wall or soliton solutions of the resulting canonical field equations with superimposed diffusive modes. The transition pathways are characterized by the nucleation and subsequent propagation of domain walls. We discuss the general switching scenario in terms of a dilute gas of propagating domain walls and evaluate the Arrhenius factor in terms of the associated action. We find excellent agreement with recent numerical optimization studies.
Physical Review Letters, Mar 13, 1995
We study the off-equilibrium relaxational dynamics of the Amit-Roginsky φ³ field theory, for which the mode coupling approximation is exact. We show that complex phenomena such as aging and ergodicity breaking are present at low temperature, similarly to what is found in long-range spin glasses. This is a generalization of mode coupling theory of the structural glass transition to off-equilibrium situations.
We review the use of mean field theory for describing the dynamics of dense, randomly connected cortical circuits. For a simple network of excitatory and inhibitory leaky integrate-and-fire neurons, we can show how the firing irregularity, as measured by the Fano factor, increases with the strength of the synapses in the network and with the value to which the membrane potential is reset after a spike. Generalizing the model to include conductance-based synapses gives insight into the connection between the firing statistics and the high-conductance state observed experimentally in visual cortex. Finally, an extension of the model to describe an orientation hypercolumn provides understanding of how cortical interactions sharpen orientation tuning, in a way that is consistent with observed firing statistics.
Neural Computation, Feb 1, 2010
Neuronal firing correlations are studied using simulations of a simple network model for a cortic... more Neuronal firing correlations are studied using simulations of a simple network model for a cortical column in a high-conductance state with dynamically balanced excitation and inhibition. Although correlations between individual pairs of neurons exhibit considerable heterogeneity, population averages show systematic behavior. When the network is in a stationary state, the average correlations are generically small: correlation coefficients are of order 1/N, where N is the number of neurons in the network. However, when the input to the network varies strongly in time, much larger values are found. In this situation, the network is out of balance, and the synaptic conductance is low, at times when the strongest firing occurs. However, examination of the correlation functions of synaptic currents reveals that after these bursts, balance is restored within a few milliseconds by a rapid increase in inhibitory synaptic conductance. These findings suggest an extension of the notion of the balanced state to include balanced fluctuations of synaptic currents, with a characteristic timescale of a few milliseconds.
The international journal of supercomputer applications, Dec 1, 1988
Neural computation is a style of computation that draws inspiration from the way the brain comput... more Neural computation is a style of computation that draws inspiration from the way the brain computes. It is an in trinsically collective paradigm characterized by high con nectivity among a very large number of simple pro cessors running in parallel, possibly asynchronously. Methods developed in the theory of many-particle systems can be brought to bear on important conceptual questions about the operation and programming of such computational assemblies. This paper reviews several basic problems that arise in this area: the mathematical formulation of the collective computation done by such a network and of algorithms for programming ("teaching") them. The importance of phase transitions for understanding the generic behavior of such systems and algorithms is emphasized.
Neurocomputing, Jun 1, 1999
BMC Neuroscience, Jul 1, 2010
Physical Review E, Sep 14, 2004
We present a dynamical description and analysis of non-equilibrium transitions in the noisy onedi... more We present a dynamical description and analysis of non-equilibrium transitions in the noisy onedimensional Ginzburg-Landau equation for an extensive system based on a weak noise canonical phase space formulation of the Freidlin-Wentzel or Martin-Siggia-Rose methods. We derive propagating nonlinear domain wall or soliton solutions of the resulting canonical field equations with superimposed diffusive modes. The transition pathways are characterized by the nucleations and subsequent propagation of domain walls. We discuss the general switching scenario in terms of a dilute gas of propagating domain walls and evaluate the Arrhenius factor in terms of the associated action. We find excellent agreement with recent numerical optimization studies.
Journal of Statistical Mechanics: Theory and Experiment, Mar 12, 2013
Neurons subject to a common non-stationary input may exhibit a correlated firing behavior. Correl... more Neurons subject to a common non-stationary input may exhibit a correlated firing behavior. Correlations in the statistics of neural spike trains also arise as the effect of interaction between neurons. Here we show that these two situations can be distinguished, with machine learning techniques, provided the data are rich enough. In order to do this, we study the problem of inferring a kinetic Ising model, stationary or nonstationary, from the available data. We apply the inference procedure to two data sets: one from salamander retinal ganglion cells and the other from a realistic computational cortical network model. We show that many aspects of the concerted activity of the salamander retinal neurons can be traced simply to the external input. A model of non-interacting neurons subject to a non-stationary external field outperforms a model with stationary input with couplings between neurons, even accounting for the differences in the number of model parameters. When couplings are added to the non-stationary model, for the retinal data, little is gained: the inferred couplings are generally not significant. Likewise, the distribution of the sizes of sets of neurons that spike simultaneously and the frequency of spike patterns as function of their rank (Zipf plots) are well-explained by an independent-neuron model with time-dependent external input, and adding connections to such a model does not offer significant improvement. For the cortical model data, robust couplings, well correlated with the real connections, can be inferred using the non-stationary model. Adding connections to this model slightly improves the agreement with the data for the probability of synchronous spikes but hardly affects the Zipf plot.
Physical Review E, Apr 7, 2015
American Journal of Physics, Jul 1, 1994
We can learn something about coding in large populations of neurons from models of the spike patt... more We can learn something about coding in large populations of neurons from models of the spike pattern distributions constructed from data. In our work, we do this for data generated from computational models of local cortical networks. This permits us to explore how features of the neuronal and synaptic properties of the network are related to those of the spike pattern distribution model. We employ the approach of Schneidman et al [1] and model this distribution by a Sherrington-Kirkpatrick (SK) model: P[S] = Z-1exp(½ΣijJijSiSj+ΣihiSi). In the work reported here, we analyze spike records from a simple model of a cortical column in a high-conductance state for two different cases: one with stationary tonic firing and the other with a rapidly time-varying input that produces rapid variations in firing rates. The average cross-correlation coefficient in the former is an order of magnitude smaller than that in the latter.To estimate the parameters Jij and hi we use a technique [2] based on inversion of the Thouless-Anderson-Palmer equations from spin glass theory. We have performed these fits for groups of neurons of sizes from 12 to 200 for tonic firing and from 6 to 800 for the case of the rapidly time-varying “stimulus”. The first two figures show that the distributions of Jij’s in the two cases are quite similar, both growing slightly narrower with increasing N. They are also qualitatively similar to those found by Schneidman et al and by Tkacik et al [3] for data from retinal networks. As in their work, it does not appear to be necessary to include higher order couplings. 
The means, which are much smaller than the standard deviations, also decrease with N, and the one for tonic firing is less than half that for the stimulus-driven network.However, the models obtained never appear to be in a spin glass phase for any of the sizes studied, in contrast to the finding of Tkacik et al, who reported spin glass behaviour at N=120. This is shown in the third figure panel. The x axis is 1/J, where J = N1/2std(Jij) and the y axis is H/J, where H is the total “field” N-1Σi(hi+ΣjJij‹Sj›). The green curve marks the Almeida-Thouless line separating the normal and spin glass phases in this parameter plane. All our data, for N ≤800 (the number of excitatory neurons in the originally-simulated network), lie in the normal region, and extrapolation from our results predicts spin glass behaviour only for N>5000.[1] E. Schneidman et al., Nature 440 1007-1012 (2006)[2] T. Tanaka, Phys Rev E 58 2302-2310 (1998); H. J. Kappen and F. B Rodriguez, Neural Comp 10 1137-1156 (1998)[3] G. Tkacik et al., arXiv:q-bio.NC/0611072 v1 (2006)
arXiv (Cornell University), Feb 18, 2000
We present a model of an olfactory system that performs odour segmentation. Based on the anatomy ... more We present a model of an olfactory system that performs odour segmentation. Based on the anatomy and physiology of natural olfactory systems, it consists of a pair of coupled modules, bulb and cortex. The bulb encodes the odour inputs as oscillating patterns. The cortex functions as an associative memory: when the input from the bulb matches a pattern stored in the connections between its units, the cortical units resonate in an oscillatory pattern characteristic of that odour. Further circuitry transforms this oscillatory signal to a slowly varying feedback to the bulb. This feedback implements olfactory segmentation by suppressing the bulbar response to the pre-existing odour, thereby allowing subsequent odours to be singled out for recognition.
Physical review, Oct 25, 2001
We study the Langevin dynamics of the standard random heteropolymer model by mapping the problem ... more We study the Langevin dynamics of the standard random heteropolymer model by mapping the problem to a supersymmetric field theory using the Martin-Siggia-Rose formalism. The resulting model is solved non-perturbatively employing a Gaussian variational approach. In constructing the solution, we assume that the chain is very long and impose the translational invariance which is expected to be present in the bulk of the globule by averaging over the center the of mass coordinate. In this way we derive equations of motion for the correlation and response functions C(t, t ′ ) and R(t, t ′ ). The order parameters are extracted from the asymptotic behavior of these functions. We find a dynamical phase diagram with frozen (glassy) and melted (ergodic) phases. In the glassy phase the system fails to reach equilibrium and exhibits aging of the type found in p-spin glasses. Within the approximations used in this study, the random heteropolymer model can be mapped to the problem of a manifold in a random potential with power law correlations.
arXiv (Cornell University), Mar 25, 2004
We present a complete mean field theory for a balanced state of a simple model of an orientation ... more We present a complete mean field theory for a balanced state of a simple model of an orientation hypercolumn. The theory is complemented by a description of a numerical procedure for solving the mean-field equations quantitatively. With our treatment, we can determine self-consistently both the firing rates and the firing correlations, without being restricted to specific neuron models. Here, we solve the analytically derived mean-field equations numerically for integrate-and-fire neurons. Several known key properties of orientation selective cortical neurons emerge naturally from the description: Irregular firing with statistics close to -but not restricted to -Poisson statistics; an almost linear gain function (firing frequency as a function of stimulus contrast) of the neurons within the network; and a contrast-invariant tuning width of the neuronal firing. We find that the irregularity in firing depends sensitively on synaptic strengths. If Fano factors are bigger than 1, then they are so for all stimulus orientations that elicit firing. We also find that the tuning of the noise in the input current is the same as the tuning of the external input, while that for the mean input current depends on both the external input and the intracortical connectivity.
Neural Information Processing Systems, Nov 30, 1992
We have trained networks of E -II units with short-range connections to simulate simple cellular ... more We have trained networks of E -II units with short-range connections to simulate simple cellular automata that exhibit complex or chaotic behaviour. Three levels of learning are possible (in decreasing order of difficulty): learning the underlying automaton rule, learning asymptotic dynamical behaviour, and learning to extrapolate the training history. The levels of learning achieved with and without weight sharing for different automata provide new insight into their dynamics.
l With a single neuron, it is not too hard to see how to adjust the weights based upon the error ... more l With a single neuron, it is not too hard to see how to adjust the weights based upon the error values. We've already seen a couple of ways. l With a multi-layer network, it is less obvious. For one thing, what is the "error" for the neurons in nonfinal layers? Without these, we don't know how to adjust. l This is called the "credit assignment" problem (maybe should be "blame assignment"). l Werbos, in his Harvard PhD thesis in 1974 found a method. l Rumelhart and McClelland, in 1985, discovered the method, presumably independently, and popularized it under the current name. l In mathematics, such methods are in the category of "optimization". The technique is gradient descent, as explained for Adalines. l However, the computation of the gradient is less clear. l With a single neuron, it is not too hard to see how to adjust the weights based upon the error values. We've already seen a couple of ways. l With a multi-layer network, it is less obvious. For one thing, what is the "error" for the neurons in non-final layers? Without these, we don't know how to adjust. l This is called the "credit assignment" problem (maybe should be "blame assignment").
Current Opinion in Neurobiology, 2015
Our ability to collect large amounts of data from many cells has been paralleled by the developme... more Our ability to collect large amounts of data from many cells has been paralleled by the development of powerful statistical models for extracting information from this data. Here we discuss how the activity of cell assemblies can be analyzed using these models, focusing on the generalized linear models and the maximum entropy models and describing a number of recent studies that employ these tools for analyzing multineuronal activity. We show results from simulations comparing inferred functional connectivity, pairwise correlations and the real synaptic connections in simulated networks demonstrating the power of statistical models in inferring functional connectivity. Further development of network reconstruction techniques based on statistical models should lead to more powerful methods of understanding functional anatomy of cell assemblies.
Physical Review E, 1999
A mean-field multi-spin interaction spin glass model is analyzed in the presence of a ferromagnetic coupling. The static and dynamical phase diagrams contain four phases (paramagnet, spin glass, ordinary ferromagnet and glassy ferromagnet) and exhibit reentrant behavior. The glassy ferromagnet phase has anomalous dynamical properties. The results are consistent with a nonequilibrium thermodynamics that has been proposed for glasses.
Physical Review Letters, 2011
There has been recent progress on inferring the structure of interactions in complex networks when they are in stationary states satisfying detailed balance, but little has been done for nonequilibrium systems. Here we introduce an approach to this problem, considering, as an example, the question of recovering the interactions in an asymmetrically coupled, synchronously updated Sherrington-Kirkpatrick model. We derive an exact iterative inversion algorithm and develop efficient approximations based on dynamical mean-field and Thouless-Anderson-Palmer equations that express the interactions in terms of equal-time and one-step-delayed correlation functions.
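A minimal sketch of this style of reconstruction: a two-spin, asymmetrically coupled kinetic Ising model is simulated, and the couplings are re-inferred from the measured equal-time (C) and one-step-delayed (D) correlations via the naive mean-field relation J ≈ A⁻¹DC⁻¹, with A_ii = 1 − m_i². The couplings and run length below are invented for illustration, and this is the simplest mean-field approximation, not the paper's exact iterative algorithm or TAP refinement:

```python
import math
import random

random.seed(2)

N, T = 2, 50_000
J_true = [[0.0, 0.5], [-0.3, 0.0]]   # asymmetric couplings (illustrative)

# Synchronous Glauber dynamics: P(s_i = +1) = (1 + tanh(h_i)) / 2.
s = [1] * N
history = []
for _ in range(T):
    h = [sum(J_true[i][j] * s[j] for j in range(N)) for i in range(N)]
    s = [1 if random.random() < 0.5 * (1.0 + math.tanh(h[i])) else -1
         for i in range(N)]
    history.append(list(s))

# Magnetizations, equal-time (C) and one-step-delayed (D) correlations.
m = [sum(row[i] for row in history) / T for i in range(N)]
C = [[sum(history[t][i] * history[t][j] for t in range(T)) / T - m[i] * m[j]
      for j in range(N)] for i in range(N)]
D = [[sum(history[t + 1][i] * history[t][j] for t in range(T - 1)) / (T - 1)
      - m[i] * m[j] for j in range(N)] for i in range(N)]

# Naive mean-field inversion: J ≈ A^{-1} D C^{-1}, A_ii = 1 - m_i^2.
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
Cinv = [[C[1][1] / det, -C[0][1] / det],
        [-C[1][0] / det, C[0][0] / det]]
J_inf = [[sum(D[i][k] * Cinv[k][j] for k in range(N)) / (1.0 - m[i] ** 2)
          for j in range(N)] for i in range(N)]
```

The inferred couplings recover the signs and approximate magnitudes of J_true; the residual underestimate (tanh saturation) is what the higher-order TAP corrections address.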
Physical Review E, 2004
We present a dynamical description and analysis of nonequilibrium transitions in the noisy one-dimensional Ginzburg-Landau equation for an extensive system, based on a weak-noise canonical phase-space formulation of the Freidlin-Wentzell or Martin-Siggia-Rose methods. We derive propagating nonlinear domain-wall or soliton solutions of the resulting canonical field equations with superimposed diffusive modes. The transition pathways are characterized by the nucleation and subsequent propagation of domain walls. We discuss the general switching scenario in terms of a dilute gas of propagating domain walls and evaluate the Arrhenius factor in terms of the associated action. We find excellent agreement with recent numerical optimization studies.
Physical Review Letters, Mar 13, 1995
We study the off-equilibrium relaxational dynamics of the Amit-Roginsky φ³ field theory, for which the mode-coupling approximation is exact. We show that complex phenomena such as aging and ergodicity breaking are present at low temperature, similarly to what is found in long-range spin glasses. This is a generalization of the mode-coupling theory of the structural glass transition to off-equilibrium situations.
We review the use of mean field theory for describing the dynamics of dense, randomly connected cortical circuits. For a simple network of excitatory and inhibitory leaky integrate-and-fire neurons, we can show how the firing irregularity, as measured by the Fano factor, increases with the strength of the synapses in the network and with the value to which the membrane potential is reset after a spike. Generalizing the model to include conductance-based synapses gives insight into the connection between the firing statistics and the high-conductance state observed experimentally in visual cortex. Finally, an extension of the model to describe an orientation hypercolumn provides understanding of how cortical interactions sharpen orientation tuning, in a way that is consistent with observed firing statistics.
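The Fano factor statistic at the heart of this analysis can be illustrated on a single-neuron caricature (this is not the paper's recurrent network model; every parameter below is an invented, illustrative value): a noisy leaky integrate-and-fire neuron is run over repeated identical trials and the spike-count variance is compared with the mean.

```python
import math
import random

random.seed(3)

# Illustrative parameters (voltage in units of threshold; times in ms).
tau, v_th, v_reset = 20.0, 1.0, 0.0   # membrane time constant, threshold, reset
dt, t_trial = 0.1, 500.0              # integration step and trial length
mu, sigma = 1.1, 0.5                  # mean drive and noise amplitude

counts = []
for _ in range(100):                  # repeated identical trials
    v, n = 0.0, 0
    for _ in range(int(t_trial / dt)):
        noise = sigma * math.sqrt(dt / tau) * random.gauss(0.0, 1.0)
        v += (dt / tau) * (mu - v) + noise   # leaky integration of noisy drive
        if v >= v_th:                        # threshold crossing: spike and reset
            v = v_reset
            n += 1
    counts.append(n)

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
fano = var / mean   # Fano factor: spike-count variance over mean
```

In this suprathreshold regime the Fano factor stays below one (fairly regular firing); in the balanced-network setting reviewed above, recurrent fluctuations drive it up toward and beyond one.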