Fast Hopfield neural networks using subspace projections

Linear Hopfield networks and constrained optimization

IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics), 1999

It is shown that a Hopfield neural network (with linear transfer functions) augmented by an additional feedforward layer can be used to compute the Moore-Penrose Generalized Inverse of a matrix. The resultant augmented linear Hopfield network can be used to solve an arbitrary set of linear equations or, alternatively, to solve a constrained least squares optimization problem. Applications in signal processing and robotics are considered. In the former case the augmented linear Hopfield network is used to estimate the "structured noise" component of a signal and adjust the parameters of an appropriate filter on-line, and in the latter case it is used to implement an on-line solution to the inverse kinematics problem via a Jacobi algorithm.
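
A minimal sketch of the underlying idea (assuming nothing about the paper's augmented feedforward layer): a linear Hopfield-style gradient flow dx/dt = -A^T (A x - b) settles at the least-squares solution x* = A^+ b, where A^+ is the Moore-Penrose generalized inverse.

```python
import numpy as np

# Minimal sketch (not the paper's augmented feedforward architecture):
# a linear Hopfield-style gradient flow dx/dt = -A^T (A x - b), whose
# fixed point is the least-squares solution x* = A^+ b, A^+ being the
# Moore-Penrose generalized inverse.
def linear_hopfield_lstsq(A, b, lr=0.01, steps=5000):
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x -= lr * A.T @ (A @ x - b)   # forward-Euler step of the dynamics
    return x

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
print(linear_hopfield_lstsq(A, b))   # network estimate
print(np.linalg.pinv(A) @ b)         # Moore-Penrose reference
```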

Convergence and stability study of Hopfield's neural network for linear programming

1992

Parameters that affect the stability and convergence of the Hopfield model were identified by simulation. The Hopfield model used to solve optimization problems is defined by an analog electrical circuit. The authors show that introducing one additional amplifier yields convergence with good stability, without oscillations. This modified model is used to solve a linear programming problem, and results are presented.
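
For flavor, a hedged sketch in the spirit of penalty-based Hopfield LP networks (not the authors' circuit with the additional amplifier): minimize c^T x subject to A x <= b and x >= 0 by descending the energy E(x) = c^T x + (k/2) * sum(max(0, A x - b)^2).

```python
import numpy as np

# Hedged sketch of a penalty-based Hopfield-style LP solver (not the
# authors' circuit with the extra amplifier): minimize c^T x subject to
# A x <= b and x >= 0 by descending the energy
#   E(x) = c^T x + (k/2) * sum(max(0, A x - b)^2).
def hopfield_lp(c, A, b, k=50.0, lr=1e-3, steps=20000):
    x = np.zeros(len(c))
    for _ in range(steps):
        viol = np.maximum(A @ x - b, 0.0)   # constraint violations
        x -= lr * (c + k * A.T @ viol)      # gradient step on the energy
        x = np.maximum(x, 0.0)              # clamp x >= 0
    return x

# maximize x1 + x2 s.t. x1 + 2*x2 <= 4 and 3*x1 + x2 <= 6,
# written as minimization of -x1 - x2; exact optimum is (1.6, 1.2).
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
print(hopfield_lp(c, A, b))   # close to (1.6, 1.2), up to penalty error
```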

Hopfield neural networks for optimization: study of the different dynamics

Neurocomputing, 2002

In this paper the application of arbitrary-order Hopfield-like neural networks to optimization problems is studied. These networks are classified into three categories according to their dynamics, and the energy function for each category is made explicit. The main problems affecting practical applications of these networks are brought to light: (a) incoherence between the network dynamics and the associated energy function; (b) error due to discrete simulation on a digital computer of the continuous dynamics equations; (c) existence of local minima; (d) convergence that depends on the coefficients weighting the cost function terms. The effect of these problems on each network is analysed and simulated, indicating possible solutions. Finally, the so-called continuous dynamics II is dealt with, proving that the integral term in the energy function is bounded, in contrast with Hopfield's statement, and proposing an efficient local-minima avoidance strategy. Experimental results are obtained solving Diophantine equation, Hamiltonian cycle and k-colorability problems.
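
Problem (b) can be made concrete with a small simulation (an illustrative sketch under assumed weights and gains, not the paper's experiments): forward-Euler discretization of the continuous dynamics du/dt = -u + W v + theta with v = tanh(u), while monitoring whether the associated energy still decreases at each step.

```python
import numpy as np

# Illustrative sketch of problem (b): forward-Euler simulation of the
# continuous Hopfield dynamics du/dt = -u + W v + theta, v = tanh(u),
# with energy
#   E(v) = -0.5 v^T W v - theta^T v + sum_i int_0^{v_i} atanh(s) ds,
# checking whether E still decreases monotonically for a given step dt.
def energy(v, W, theta):
    # closed form of the integral term for g(u) = tanh(u)
    integ = np.sum(v * np.arctanh(v) + 0.5 * np.log(1.0 - v**2))
    return -0.5 * v @ W @ v - theta @ v + integ

rng = np.random.default_rng(0)
n = 5
W = rng.standard_normal((n, n))
W = 0.5 * (W + W.T)                    # symmetric weights (Lyapunov case)
theta = rng.standard_normal(n)
u0 = 0.1 * rng.standard_normal(n)

for dt in (0.01, 0.9):                 # small vs. large step size
    u, increases = u0.copy(), 0
    prev = energy(np.tanh(u), W, theta)
    for _ in range(2000):
        u += dt * (-u + W @ np.tanh(u) + theta)   # Euler step
        cur = energy(np.tanh(u), W, theta)
        increases += cur > prev + 1e-12
        prev = cur
    print(f"dt={dt}: energy increased on {increases} of 2000 steps")
```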

Computational Properties of Generalized Hopfield Networks Applied to Nonlinear Optimization

1990

A nonlinear neural framework, called the Generalized Hopfield Network, is proposed, which is able to solve in a parallel distributed manner systems of nonlinear equations. The method is applied to the general nonlinear optimization problem. We demonstrate GHNs implementing the three most important optimization algorithms, namely the Augmented Lagrangian, Generalized Reduced Gradient and Successive Quadratic Programming methods. The study results in a dynamic view of the optimization problem and offers a straightforward model for the parallelization of the optimization computations, thus significantly extending the practical limits of problems that can be formulated as optimization problems and that can gain from the introduction of nonlinearities in their structure (e.g. pattern recognition, supervised learning, design of content-addressable memories).
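
A toy sketch of the Augmented Lagrangian member of this family (assumed functions and gains; the paper's GHN construction is not reproduced): descend in x and ascend in the multiplier lambda on L(x, lambda) = f(x) + lambda*h(x) + (mu/2)*h(x)^2 for an equality-constrained problem.

```python
import numpy as np

# Toy sketch of the Augmented Lagrangian member of this family (assumed
# functions and gains, not the paper's GHN circuit): for
# min f(x) s.t. h(x) = 0, descend in x and ascend in lambda on
#   L(x, lam) = f(x) + lam * h(x) + (mu/2) * h(x)^2.
def aug_lagrangian_flow(grad_f, h, grad_h, x0, lr=1e-2, mu=10.0, steps=20000):
    x, lam = np.asarray(x0, dtype=float), 0.0
    for _ in range(steps):
        x -= lr * (grad_f(x) + (lam + mu * h(x)) * grad_h(x))  # primal descent
        lam += lr * mu * h(x)                                  # dual ascent
    return x, lam

# Example: min x1^2 + x2^2 subject to x1 + x2 - 1 = 0; optimum (0.5, 0.5).
x, lam = aug_lagrangian_flow(
    grad_f=lambda x: 2.0 * x,
    h=lambda x: x[0] + x[1] - 1.0,
    grad_h=lambda x: np.ones(2),
    x0=[0.0, 0.0],
)
print(x, lam)   # approx [0.5, 0.5] and lam approx -1
```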

Fast-Convergent Double-Sigmoid Hopfield Neural Network as Applied to Optimization Problems

The Hopfield neural network (HNN) has been widely used for numerous optimization problems since the early 1980s. The convergence speed of the HNN (even in the high-gain regime) plays a critical role in various real-time applications. In this brief, we propose and analyze a generalized HNN which drastically improves the convergence speed of the network, and thus makes it possible to benefit from the HNN's capabilities when solving optimization problems in real time. Computer simulations on the channel allocation optimization problem in cellular radio systems (which is NP-complete and requires a fast solution due to time-varying link gains) and on the associative memory problem confirm the dramatic improvement in convergence speed, at the expense of using a second nonlinear function in the proposed network.
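
The paper's specific double-sigmoid activation is not reproduced here; as a hedged stand-in, the sketch below composes two tanh functions and compares recall speed against a single tanh in a toy Hebbian associative memory.

```python
import numpy as np

# Hedged stand-in only: the paper defines its own double-sigmoid; here we
# simply compose two tanh functions, v = tanh(b2 * tanh(b1 * u)), and
# compare recall speed against a single tanh in a toy Hebbian memory.
def recall(W, u0, act, steps=200, tol=1e-6):
    u = u0.copy()
    for k in range(steps):
        u_new = W @ act(u)                    # one synchronous update
        if np.linalg.norm(u_new - u) < tol:
            return np.sign(act(u_new)), k     # converged state, iterations
        u = u_new
    return np.sign(act(u)), steps

rng = np.random.default_rng(1)
p = np.sign(rng.standard_normal(64))          # one stored bipolar pattern
W = np.outer(p, p) / 64.0                     # Hebbian weight matrix
probe = np.where(rng.uniform(size=64) < 0.1, -p, p)   # ~10% bits flipped

single = lambda u: np.tanh(2.0 * u)
double = lambda u: np.tanh(4.0 * np.tanh(2.0 * u))
for name, act in (("single", single), ("double", double)):
    v, k = recall(W, probe, act)
    print(name, "recovered:", np.array_equal(v, p), "iterations:", k)
```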

A fast adaptive algorithm for Hopfield neural network

2003

This paper presents a gradient-based algorithm to speed up the convergence of the Hopfield neural network. To achieve this, we introduce an individual step size η that is adapted according to the gradient information. The algorithm is applied to several benchmark problems; extensive simulations confirm its effectiveness.
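
A hedged sketch of the general idea (the paper's precise adaptation rule is not reproduced): give each neuron its own step size and adapt it from gradient signs while descending a quadratic Hopfield energy.

```python
import numpy as np

# Hedged sketch of the general idea (not the paper's exact rule): each
# neuron gets its own step size eta_i, grown while the energy gradient
# keeps its sign and shrunk on a sign flip, descending the quadratic
# energy E(v) = -0.5 v^T W v - theta^T v over v in [-1, 1]^n.
def adaptive_hopfield(W, theta, v0, steps=500, up=1.2, down=0.5):
    v = v0.copy()
    eta = np.full_like(v, 0.01)
    g_prev = np.zeros_like(v)
    for _ in range(steps):
        g = -(W @ v + theta)                   # gradient dE/dv
        eta = np.clip(np.where(g * g_prev > 0, eta * up, eta * down),
                      1e-5, 0.5)               # per-neuron adaptation
        v = np.clip(v - eta * g, -1.0, 1.0)    # bounded neuron outputs
        g_prev = g
    return v

rng = np.random.default_rng(2)
n = 10
W = rng.standard_normal((n, n))
W = 0.5 * (W + W.T)                            # symmetric weights
theta = rng.standard_normal(n)
print(adaptive_hopfield(W, theta, np.zeros(n)))   # a local minimizer of E
```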

On the performance of the LP-guided Hopfield network-genetic algorithm

Computers & Operations Research, 2009

This paper proposes a linear programming (LP)-guided Hopfield network-genetic algorithm for a class of combinatorial optimization problems that admit a 0-1 integer linear programming formulation. The algorithm modifies the updating order of the binary Hopfield network in order to improve the performance of the complete hybrid approach. We theoretically analyze several proposed updating orders. We also include a novel proposal to guide the Hopfield network using the crossover and mutation operators of the genetic algorithm. Experimental evidence of the good performance of the proposed approach on two different combinatorial optimization problems is also included in the paper.
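
A speculative sketch of the updating-order ingredient only (the LP and genetic-algorithm machinery is omitted, and the ordering heuristic shown is an assumption, not the paper's): an asynchronous binary Hopfield network whose neuron visit order is supplied externally.

```python
import numpy as np

# Speculative sketch of the updating-order ingredient only (the LP and
# genetic-algorithm machinery is omitted; the ordering heuristic is an
# assumption, not the paper's): an asynchronous binary Hopfield network
# whose neuron visit order is supplied externally.
def binary_hopfield(W, theta, v, order, sweeps=50):
    for _ in range(sweeps):
        changed = False
        for i in order:                          # externally guided order
            new = 1 if W[i] @ v + theta[i] > 0 else 0
            changed |= bool(new != v[i])
            v[i] = new
        if not changed:
            break                                # fixed point reached
    return v

rng = np.random.default_rng(3)
n = 8
W = rng.standard_normal((n, n))
W = 0.5 * (W + W.T)
np.fill_diagonal(W, 0.0)                         # no self-connections
theta = rng.standard_normal(n)
lp_vals = rng.uniform(size=n)                    # stand-in LP relaxation values
order = np.argsort(-np.abs(lp_vals - 0.5))       # most decisive variables first
print(binary_hopfield(W, theta, rng.integers(0, 2, n), order))
```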

Hopfield Network as Static Optimizer: Learning the Weights and Eliminating the Guesswork

Neural Processing Letters, 2008

This article presents a simulation study validating an adaptation methodology for learning the weights of a Hopfield neural network configured as a static optimizer. The quadratic Lyapunov function associated with the Hopfield network dynamics is leveraged to map the set of constraints associated with a static optimization problem. This approach leads to a set of constraint-specific penalty or weighting coefficients whose values need to be defined. The methodology uses a learning-based approach to set the values of these constraint weighting coefficients through adaptation. These values are in turn used to compute the network weights, effectively eliminating the guesswork in defining weight values for a given static optimization problem, which has been a long-standing challenge for artificial neural networks. The simulation study uses the Traveling Salesman Problem from the domain of combinatorial optimization. Simulation results indicate that the adaptation procedure guides the Hopfield network towards solutions of the problem starting from random values for the weights and constraint weighting coefficients. At the conclusion of the adaptation phase, the Hopfield network has acquired weight values that position it to search for local minimum solutions. The demonstrated application of the adaptation procedure eliminates the need to guess or predetermine the weight values of the Hopfield network.
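
A hedged sketch of the adaptation loop's flavor (the article's full TSP formulation is omitted; run_network and violations are hypothetical stand-ins): rerun the network and raise the weighting coefficient of any constraint the converged state still violates, instead of guessing the coefficients up front.

```python
import numpy as np

# Hedged sketch of the adaptation loop's flavor (the article's full TSP
# formulation is omitted; run_network and violations are hypothetical
# stand-ins): rerun the network and boost the weighting coefficient of
# any constraint that the converged state still violates.
def adapt_weights(run_network, violations, c0, rate=1.5, rounds=20):
    c = np.asarray(c0, dtype=float)
    v = run_network(c)
    for _ in range(rounds):
        viol = violations(v)                     # per-constraint violation
        if np.all(viol <= 1e-6):
            break                                # all constraints satisfied
        c *= np.where(viol > 1e-6, rate, 1.0)    # raise violated terms only
        v = run_network(c)                       # converged state, new weights
    return c, v

# Toy usage: constraint satisfaction depends on how strongly each term is
# weighted relative to a fixed objective pull.
run = lambda c: c / (1.0 + c)                    # stand-in converged network
viol = lambda v: np.maximum(0.9 - v, 0.0)        # want each v_i >= 0.9
c, v = adapt_weights(run, viol, c0=[1.0, 1.0])
print(c, v)   # weights grow until both constraints are met
```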