Linear Hopfield networks and constrained optimization

Computational Properties of Generalized Hopfield Networks Applied to Nonlinear Optimization

1990

A nonlinear neural framework, called the Generalized Hopfield Network, is proposed, which is able to solve systems of nonlinear equations in a parallel, distributed manner. The method is applied to the general nonlinear optimization problem. We demonstrate GHNs implementing the three most important optimization algorithms, namely the Augmented Lagrangian, Generalized Reduced Gradient and Successive Quadratic Programming methods. The study results in a dynamic view of the optimization problem and offers a straightforward model for the parallelization of the optimization computations, thus significantly extending the practical limits of problems that can be formulated as optimization problems and that can gain from the introduction of nonlinearities in their structure (e.g., pattern recognition, supervised learning, design of content-addressable memories).

Generalized Hopfield Networks and Nonlinear Optimization

A nonlinear neural framework, called the Generalized Hopfield Network, is proposed, which is able to solve systems of nonlinear equations in a parallel, distributed manner. The method is applied to the general nonlinear optimization problem. We demonstrate GHNs implementing the three most important optimization algorithms, namely the Augmented Lagrangian, Generalized Reduced Gradient and Successive Quadratic Programming methods. The study results in a dynamic view of the optimization problem and offers a straightforward model for the parallelization of the optimization computations, thus significantly extending the practical limits of problems that can be formulated as optimization problems and that can gain from the introduction of nonlinearities in their structure (e.g., pattern recognition, supervised learning, design of content-addressable memories).
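
Neither paper's code is reproduced here, but the central idea, casting a constrained problem as continuous "neuron" dynamics that descend an augmented Lagrangian, can be sketched in a few lines. The problem instance, penalty parameter mu, and step size dt below are illustrative assumptions, not values from the papers.

```python
# Illustrative sketch: primal-dual gradient flow on an augmented Lagrangian,
# in the spirit of a Hopfield-style network whose "neurons" are the variables
# and the multiplier.
import numpy as np

# objective f(x) = (x1 - 1)^2 + (x2 - 2)^2, constraint h(x) = x1 + x2 - 1 = 0
def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

def h(x):
    return x[0] + x[1] - 1.0

grad_h = np.array([1.0, 1.0])

x, lam, mu, dt = np.zeros(2), 0.0, 10.0, 0.01
for _ in range(5000):
    # variable neurons descend the augmented Lagrangian L = f + lam*h + (mu/2)*h^2
    x = x - dt * (grad_f(x) + (lam + mu * h(x)) * grad_h)
    # the multiplier neuron ascends on the constraint violation
    lam = lam + dt * mu * h(x)

print("x =", x, " h(x) =", h(x), " lambda =", lam)   # x -> [0, 1], lambda -> 2
```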

Fast Hopfield neural networks using subspace projections

Neurocomputing, 2010

Hopfield Neural Networks are well-suited to the fast solution of complex optimization problems. Their application to real problems usually requires the satisfaction of a set of linear constraints that can be incorporated with an additional violation term. Another option proposed in the literature lies in confining the search space onto the subspace of constraints in such a way that the neuron outputs always satisfy the imposed restrictions. This paper proposes a computationally efficient subspace projection method that also includes variable updating step mechanisms. Some numerical experiments are used to verify the good performance and fast convergence of the new method.
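
A minimal sketch of the subspace-projection idea (not the paper's algorithm, which also includes variable step-size mechanisms): each descent step is projected onto the null space of the constraint matrix, so the linear constraints remain satisfied throughout. The energy function, constraint, and step size below are assumed for illustration.

```python
# Keep the Hopfield update inside the affine subspace {x : A x = b} by
# projecting each step onto the null space of A.
import numpy as np

A = np.array([[1.0, 1.0, 1.0]])          # single constraint: x1 + x2 + x3 = 1
b = np.array([1.0])

# projector onto the null space of A: P = I - A^T (A A^T)^{-1} A
P = np.eye(3) - A.T @ np.linalg.solve(A @ A.T, A)

def grad_E(x):                            # example energy: E(x) = 0.5 * ||x - c||^2
    c = np.array([0.8, 0.1, 0.5])
    return x - c

x = np.array([1.0, 0.0, 0.0])             # feasible starting point (A x = b)
step = 0.1
for _ in range(500):
    x = x - step * (P @ grad_E(x))         # projected descent step stays feasible

print(x, A @ x)                            # A @ x remains [1.0] throughout
```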

Constrained Hopfield neural network for real-time predictive control

1994

The hardware implementation of an optimization network with restrictions to perform real-time Generalized Predictive Control (GPC) is described. The use of a space-efficient stochastic architecture allows a realization on a programmable logic device. As a result, a programmable neural-chip coprocessor that solves optimization problems subject to restrictions has been developed. Expressions for the network parameters are provided to implement GPC. An adaptive controller is achieved by using RAM memories to store the network parameters. Experimental results from a simple implementation of the controller are included.

A fast adaptive algorithm for Hopfield neural network

2003

This paper presents a gradient-based algorithm to speed up the convergence of the Hopfield neural network. To achieve this, we introduce an individual step size η, which is adapted according to the gradient information. The algorithm is applied to several benchmark problems; extensive simulations are performed, and its effectiveness is confirmed.
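
The paper's exact adaptation rule is not reproduced here; the sketch below only illustrates the general idea of per-component step sizes adapted from gradient information, applied to descent on an assumed quadratic energy.

```python
# Per-component step sizes eta_i: increased when successive gradient components
# keep the same sign, decreased when the sign flips.
import numpy as np

Q = np.array([[4.0, 1.0], [1.0, 3.0]])    # positive definite -> unique minimum
b = np.array([1.0, 2.0])                  # energy E(x) = 0.5 x^T Q x - b^T x

x = np.zeros(2)
eta = np.full(2, 0.05)                    # individual step sizes
prev_g = np.zeros(2)

for _ in range(200):
    g = Q @ x - b                         # gradient of the energy
    same_sign = np.sign(g) == np.sign(prev_g)
    eta = np.where(same_sign, eta * 1.2, eta * 0.5)   # adapt each component
    eta = np.clip(eta, 1e-4, 0.3)
    x = x - eta * g
    prev_g = g

print(x, np.linalg.solve(Q, b))           # adapted descent vs. exact minimizer
```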

Hopfield neural networks for optimization: study of the different dynamics

Neurocomputing, 2002

In this paper the application of arbitrary-order Hopfield-like neural networks to optimization problems is studied. These networks are classified into three categories according to their dynamics, and the energy function is made explicit for each category. The main problems affecting practical applications of these networks are brought to light: (a) incoherence between the network dynamics and the associated energy function; (b) error due to discrete simulation on a digital computer of the continuous dynamics equations; (c) existence of local minima; (d) convergence that depends on the coefficients weighting the cost function terms. The effect of these problems on each network is analysed and simulated, indicating possible solutions. Finally, the so-called continuous dynamics II is dealt with, proving that the integral term in the energy function is bounded, in contrast with Hopfield's statement, and proposing an efficient local-minima avoidance strategy. Experimental results are obtained solving Diophantine equation, Hamiltonian cycle and k-colorability problems.
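
As a point of reference for issues (a) and (b), the sketch below simulates the standard first-order continuous Hopfield dynamics (not the arbitrary-order networks studied in the paper) with an explicit Euler step and monitors the associated energy; the weights, biases, and step sizes are arbitrary assumptions, and a large time step no longer guarantees a monotone energy decrease.

```python
# Euler simulation of du/dt = -u + T v + I with v = tanh(u), tracking the
# standard energy function of the continuous Hopfield model.
import numpy as np

rng = np.random.default_rng(0)
n = 5
T = rng.normal(size=(n, n))
T = 0.5 * (T + T.T)                      # symmetric connection matrix
np.fill_diagonal(T, 0.0)
I = rng.normal(size=n)                   # bias currents

def energy(v, u):
    # E = -0.5 v^T T v - I^T v + sum_i \int_0^{v_i} atanh(s) ds;
    # with v = tanh(u), the integral term equals v*u - log(cosh(u))
    return -0.5 * v @ T @ v - I @ v + np.sum(v * u - np.log(np.cosh(u)))

def simulate(dt, steps=400):
    u = rng.normal(size=n) * 0.1
    E = []
    for _ in range(steps):
        v = np.tanh(u)
        E.append(energy(v, u))
        u = u + dt * (-u + T @ v + I)    # explicit Euler step
    return np.array(E)

for dt in (0.01, 1.0):
    E = simulate(dt)
    increases = int(np.sum(np.diff(E) > 1e-9))
    print("dt=%.2f  E[0]=%.3f  E[-1]=%.3f  energy increases: %d"
          % (dt, E[0], E[-1], increases))
```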

Hopfield Network as Static Optimizer: Learning the Weights and Eliminating the Guesswork

Neural Processing Letters, 2008

This article presents a simulation study for validation of an adaptation methodology for learning the weights of a Hopfield neural network configured as a static optimizer. The quadratic Liapunov function associated with the Hopfield network dynamics is leveraged to map the set of constraints associated with a static optimization problem. This approach leads to a set of constraint-specific penalty or weighting coefficients whose values need to be defined. The methodology uses a learning-based approach to define the values of these constraint weighting coefficients through adaptation; they are in turn used to compute the values of the network weights, effectively eliminating the guesswork in defining weight values for a given static optimization problem, which has been a long-standing challenge in artificial neural networks. The simulation study is performed using the Traveling Salesman Problem from the domain of combinatorial optimization. Simulation results indicate that the adaptation procedure is able to guide the Hopfield network towards solutions of the problem starting with random values for the weights and constraint weighting coefficients. At the conclusion of the adaptation phase, the Hopfield network acquires weight values that readily position the network to search for local minimum solutions, eliminating the need to guess or predetermine the weight values.
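
The article's TSP formulation and adaptation procedure are not reproduced here; the following generic sketch only illustrates the idea of adapting a constraint weighting coefficient until the relaxed network state satisfies the constraint, on an assumed toy problem with made-up targets and tolerances.

```python
# Adapt the penalty coefficient c whenever the relaxed Hopfield-style descent
# still violates the constraint; c then determines the effective weights.
import numpy as np

def relax(c, steps=5000):
    # descent on E(x) = 0.5*||x - target||^2 + 0.5*c*(sum(x) - 1)^2,
    # with the step size kept small enough for the current penalty strength c
    target = np.array([0.9, 0.7, 0.4])
    x = np.zeros(3)
    dt = 1.0 / (1.0 + 3.0 * c)
    for _ in range(steps):
        grad = (x - target) + c * (np.sum(x) - 1.0) * np.ones(3)
        x -= dt * grad
    return x

c = 1.0
for _ in range(20):
    x = relax(c)
    if abs(np.sum(x) - 1.0) < 1e-2:
        break
    c *= 2.0                       # increase the constraint weighting coefficient
print("c =", c, " x =", x, " sum(x) =", float(np.sum(x)))
```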

Neural Networks to Solve Nonlinear Inverse Kinematic Problems

Advances in computational intelligence and robotics book series, 2018

To make neural networks learn nonlinear relations effectively, appropriate training sets are needed. In the proposed method, after a certain number of iterations, the input-output pairs with the worst errors are extracted from the original training set and form a new temporary set. From the following iteration, the temporary set is applied to the neural networks instead of the original set, so that only the pairs with the worst errors are used for updating the weights until the mean error decreases to a desired level. Once this learning with the temporary set is completed, the original set is applied again in its place. The effectiveness of the proposed approach is demonstrated through simulations using kinematic models of a leg module with a serial link structure and an industrial robot.
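
A minimal sketch of the described training scheme, with assumed details (a tiny numpy MLP, a sine target, and arbitrary thresholds) standing in for the kinematic models used in the chapter:

```python
# Periodically collect the worst-error pairs into a temporary set and train on
# it alone until its mean error drops, then return to the full set.
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
Y = np.sin(X)                                     # stand-in nonlinear relation

W1, b1 = rng.normal(0.0, 0.5, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0.0, 0.5, (16, 1)), np.zeros(1)

def train_step(x, y, lr=0.05):
    """One batch gradient-descent step; returns per-pair absolute errors."""
    global W1, b1, W2, b2
    h = np.tanh(x @ W1 + b1)
    out = h @ W2 + b2
    err = out - y
    dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    return np.abs(err).ravel()

for epoch in range(200):
    errors = train_step(X, Y)                     # pass over the full set
    if epoch % 50 == 49:
        idx = np.argsort(errors)[-len(X) // 4:]   # worst quarter of the pairs
        Xt, Yt = X[idx], Y[idx]
        for _ in range(500):                      # train on hard pairs only until
            if train_step(Xt, Yt).mean() <= errors.mean():
                break                             # their mean error drops enough

pred = np.tanh(X @ W1 + b1) @ W2 + b2
print("final mean abs error:", float(np.abs(pred - Y).mean()))
```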

A contribution to the use of Hopfield neural networks for parameter estimation

2007 European Control Conference (ECC), 2007

This paper presents a contribution to the use of Hopfield neural networks (HNNs) for parameter estimation. Our focus is on time-invariant systems that are linear in the parameters. We introduce a suitable HNN and present a weaker condition than the currently existing ones that guarantees the convergence of the parameterization estimated by the network to the actual parameterization. The application of our results is illustrated in a parameter estimation problem for a two carts system.
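
The paper's specific network and its weakened convergence condition are not reproduced here; the sketch below only illustrates the underlying setting of a gradient-flow estimator for a system that is linear in the parameters, with assumed synthetic data.

```python
# Gradient-flow estimator d(theta)/dt = -Phi^T (Phi theta - y), simulated with
# an Euler step, driven toward the least-squares parameter estimate.
import numpy as np

rng = np.random.default_rng(2)
theta_true = np.array([1.5, -0.7])

Phi = rng.normal(size=(100, 2))               # regressor matrix
y = Phi @ theta_true + 0.01 * rng.normal(size=100)

theta, dt = np.zeros(2), 0.001
for _ in range(5000):
    theta -= dt * Phi.T @ (Phi @ theta - y)   # Euler step of the estimator flow

print("estimate:", theta, " true:", theta_true)
```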

Solving Inverse Kinematics Using Neural Networks

A classical problem in robotic control is solving the inverse kinematics. The problem is also a complex one, as there are no general algorithms for solving a set of nonlinear equations. The traditional approaches to solving a robot's inverse kinematics are geometric, algebraic, and numerical. Because there is no single method for solving the inverse kinematics, the problem can be quite difficult, and there are typically multiple solutions through which the robot can achieve a desired configuration. This paper describes a method of using neural networks to solve the inverse kinematics for a simple planar robot. A prescribed path for the robot's end effector is defined, with the x and y coordinates as well as the orientation as the inputs to the neural network and the joint angles required to follow the desired trajectory as the outputs. The results obtained from the neural network are compared to those of an algebraic solution.
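
A hedged sketch of the data-generation step such an approach relies on, with assumed link lengths and joint-angle sampling (the paper's actual architecture and prescribed path are not specified here): forward kinematics of a three-link planar arm produces (x, y, orientation) inputs paired with joint-angle targets for a regression network.

```python
# Build training pairs for learning the inverse kinematics of a planar arm:
# sample joint angles, compute forward kinematics, and invert the mapping by
# regression (the regression model itself is not shown here).
import numpy as np

L1, L2, L3 = 1.0, 0.8, 0.5                              # assumed link lengths

def forward_kinematics(q):
    q1, q2, q3 = q
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2) + L3 * np.cos(q1 + q2 + q3)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2) + L3 * np.sin(q1 + q2 + q3)
    phi = q1 + q2 + q3                                  # end-effector orientation
    return np.array([x, y, phi])

rng = np.random.default_rng(3)
Q = rng.uniform(-np.pi / 2, np.pi / 2, size=(1000, 3))  # sampled joint angles
P = np.array([forward_kinematics(q) for q in Q])        # network inputs (x, y, phi)
# a regression model (e.g. a small MLP) is then trained with P as inputs and
# Q as targets; its outputs approximate joint angles along a prescribed path
print(P.shape, Q.shape)
```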