Neural networks for solving second-order cone constrained variational inequality problem
Related papers
This paper proposes a neural network approach for efficiently solving general nonlinear convex programs with second-order cone constraints. The proposed neural network model is based on a smoothed natural residual merit function arising from an unconstrained minimization reformulation of the complementarity problem. We study the existence and convergence of the trajectory of the neural network. Moreover, we show several stability properties of the considered neural network, including Lyapunov stability, asymptotic stability, and exponential stability. The examples in this paper provide a further demonstration of the effectiveness of the proposed neural network. This paper can be viewed as a follow-up to an earlier work, since more stability results are obtained here.
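For orientation, the natural residual construction commonly used in this literature takes the following form, sketched here under the assumption that the paper follows the standard setup (its exact smoothing may differ). With \Pi_{\mathcal{K}} the metric projection onto the second-order cone \mathcal{K} and \mu > 0 a smoothing parameter,

$$\Phi_{\mathrm{NR}}(x,y) = x - \Pi_{\mathcal{K}}(x-y), \qquad \Psi_{\mu}(z) = \tfrac{1}{2}\,\big\|\Phi_{\mathrm{NR}}^{\mu}(z)\big\|^{2},$$

where \Phi_{\mathrm{NR}}^{\mu} replaces \Pi_{\mathcal{K}} by a smooth approximation; the neural network is then the gradient flow \dot{z}(t) = -\rho\,\nabla\Psi_{\mu}(z(t)) with scaling \rho > 0.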
IEEE Transactions on Neural Networks and Learning Systems, 2014
This paper presents one-layer projection neural networks based on projection operators for solving constrained variational inequalities and related optimization problems. Sufficient conditions for global convergence of the proposed neural networks are provided based on Lyapunov stability. Compared with the existing neural networks for variational inequalities and optimization, the proposed neural networks have lower model complexity. In addition, some improved criteria for global convergence are given. Compared with our previous work, a design parameter has been added to the projection neural network models, which results in improved performance. Simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural networks.
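As a point of reference, one-layer projection neural networks for the variational inequality VI(F, \Omega) are commonly written as the dynamical system below (a generic sketch, not necessarily this paper's exact model), where P_{\Omega} is the projection onto the feasible set \Omega, \alpha > 0 is a fixed step parameter, and \lambda > 0 plays the role of the added design parameter:

$$\frac{dx}{dt} = \lambda\,\big\{P_{\Omega}\big(x - \alpha F(x)\big) - x\big\}.$$

Equilibria of this system coincide with the solutions of the variational inequality, which is what the Lyapunov analysis exploits.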
Mathematical Problems in Engineering
This paper focuses on solving quadratic programming problems with second-order cone constraints (SOCQP) and the second-order cone constrained variational inequality (SOCCVI) by using neural networks. More specifically, a neural network model based on two discrete-type families of SOC complementarity functions associated with the second-order cone is proposed to deal with the Karush-Kuhn-Tucker (KKT) conditions of SOCQP and SOCCVI. The two discrete-type families of SOC complementarity functions are newly explored. The neural network uses these functions to construct two unconstrained minimizations whose objectives are merit functions for the KKT equations of SOCQP and SOCCVI. We show that the merit functions for SOCQP and SOCCVI are Lyapunov functions and that this neural network is asymptotically stable. The main contribution of this paper lies in its simulation part, where we observe numerical performance different from that of existing methods.
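For readers unfamiliar with the terminology: a mapping \phi is called an SOC complementarity function precisely when its zeros characterize the cone complementarity condition (a standard definition; the discrete-type families above are particular instances constructed in the paper):

$$\phi(x,y) = 0 \iff x \in \mathcal{K}^{n},\quad y \in \mathcal{K}^{n},\quad \langle x, y\rangle = 0.$$

Summing the squared norms of such functions over the KKT conditions yields the merit functions whose gradient flows define the neural networks.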
A Neural Network Based on the Metric Projector for Solving SOCCVI Problem
IEEE Transactions on Neural Networks and Learning Systems, 2020
We propose an efficient neural network for solving the second-order cone constrained variational inequality (SOCCVI). The network is constructed from the Karush-Kuhn-Tucker (KKT) conditions of the variational inequality, which recast the SOCCVI as a system of equations; a smoothing function for the metric projection mapping is used to deal with the complementarity condition. Aside from standard stability results, we explore second-order sufficient conditions to obtain exponential stability. Specifically, we prove the nonsingularity of the Jacobian of the KKT system based on the second-order sufficient condition and constraint nondegeneracy. Finally, we present some numerical experiments illustrating the efficiency of the neural network in solving SOCCVI problems. Our numerical simulations reveal that, in general, the new neural network outperforms the other neural networks in the SOCCVI literature in terms of stability and the convergence rates of trajectories to the SOCCVI solution.
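To make the metric projector concrete, the sketch below computes the closed-form projection onto a single second-order cone via spectral decomposition, with an optional CHKS-type smoothing of the plus function. This is an illustration only: the paper's smoothing function may differ, and the function name `soc_projection` and parameter `mu` are choices made here.

```python
import numpy as np

def soc_projection(x, mu=0.0):
    """Metric projection of x = (x1, x2) onto the second-order cone
    K = {(x1, x2) : ||x2|| <= x1}. With mu > 0, the plus function
    max(t, 0) is replaced by the smooth CHKS-type approximation
    (t + sqrt(t^2 + 4*mu^2)) / 2, yielding a smoothed projection."""
    x1, x2 = x[0], x[1:]
    norm_x2 = np.linalg.norm(x2)
    # spectral decomposition of x with respect to the cone
    lam1, lam2 = x1 - norm_x2, x1 + norm_x2
    w = x2 / norm_x2 if norm_x2 > 0 else np.zeros_like(x2)
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    if mu > 0:
        plus = lambda t: 0.5 * (t + np.sqrt(t * t + 4.0 * mu * mu))
    else:
        plus = lambda t: max(t, 0.0)
    return plus(lam1) * u1 + plus(lam2) * u2
```

For x already in the cone (lam1 >= 0) the unsmoothed projection returns x itself, and as mu tends to 0 the smoothed projection converges to the exact one.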
A neural network for monotone variational inequalities with linear constraints
Physics Letters A, 2003
The variational inequality provides a unified framework for many important optimization and equilibrium problems. Based on necessary and sufficient conditions for the solution, this Letter presents a neural network model for solving linearly constrained variational inequalities. Several sufficient conditions are provided to ensure the asymptotic stability of the proposed network. There is no need to estimate the Lipschitz constant, and no extra parameter is introduced. Since the sufficient conditions provided in this Letter can be easily checked in practice, these new results have both theoretical and practical value. The validity and transient behavior of the proposed neural network are demonstrated by some numerical examples.
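The necessary and sufficient condition alluded to above is usually the projection fixed-point characterization standard in the VI literature: x* solves VI(F, \Omega) if and only if

$$x^{*} = P_{\Omega}\big(x^{*} - \alpha F(x^{*})\big) \quad \text{for some (equivalently, any) } \alpha > 0,$$

which is what allows the network's equilibria to be identified with the VI solutions.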
Neurocomputing, 2016
This paper proposes a neural network approach to efficiently solve nonlinear convex programs with second-order cone constraints. The neural network model is designed using the generalized Fischer-Burmeister function associated with the second-order cone. We study the existence and convergence of the trajectory of the considered neural network. Moreover, we show stability properties for the considered neural network, including Lyapunov stability, asymptotic stability, and exponential stability. Illustrative examples give a further demonstration of the effectiveness of the proposed neural network. Numerical performance under parameter perturbation and numerical comparisons with other neural network models are also provided. Overall, our model performs better than the two comparison methods.
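In the Jordan-algebra notation standard in this literature, the generalized Fischer-Burmeister function associated with the second-order cone can be sketched as

$$\phi_{p}(x,y) = \big(|x|^{p} + |y|^{p}\big)^{1/p} - (x+y), \qquad p \in (1,\infty),$$

where the absolute value and powers are taken spectrally in the Jordan algebra; p = 2 recovers the classical Fischer-Burmeister function \phi(x,y) = (x^{2} + y^{2})^{1/2} - (x+y), with x^{2} = x \circ x the Jordan product square.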
Recurrent neural networks for solving second-order cone programs
Neurocomputing, 2011
This paper proposes using neural networks to efficiently solve second-order cone programs (SOCP). To establish the neural networks, the SOCP is first reformulated as a second-order cone complementarity problem (SOCCP) via the Karush-Kuhn-Tucker conditions of the SOCP. SOCCP functions, which transform the SOCCP into a set of nonlinear equations, are then utilized to design the neural networks. We propose two kinds of neural networks with different SOCCP functions. The first neural network uses the Fischer-Burmeister function to achieve an unconstrained minimization of a merit function. We show that the merit function is a Lyapunov function and that this neural network is asymptotically stable. The second neural network utilizes the natural residual function with the cone projection function to achieve low computational complexity. It is shown to be Lyapunov stable and to converge globally to an optimal solution under certain conditions. The SOCP simulation results demonstrate the effectiveness of the proposed neural networks.
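The first construction can be illustrated in the special case where the cone is the nonnegative orthant, so the Fischer-Burmeister function acts componentwise. The sketch below simulates the gradient flow of the FB merit function on a toy linear complementarity problem; the finite-difference gradient and all names here are illustrative choices, not the paper's implementation:

```python
import numpy as np

def fb(a, b):
    """Componentwise Fischer-Burmeister function: zero exactly when
    a >= 0, b >= 0 and a * b = 0 hold componentwise."""
    return np.sqrt(a * a + b * b) - (a + b)

def merit(x, F):
    """Merit function Psi(x) = 0.5 * ||fb(x, F(x))||^2."""
    phi = fb(x, F(x))
    return 0.5 * float(phi @ phi)

def grad(f, x, h=1e-7):
    """Central-difference gradient (for illustration only)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

# Toy linear complementarity problem: find x >= 0 with F(x) >= 0
# and x complementary to F(x), where F(x) = M x + q.
M = np.array([[2.0, 1.0], [1.0, 3.0]])
q = np.array([-4.0, -5.0])
F = lambda x: M @ x + q

x, rho, dt = np.zeros(2), 5.0, 1e-2
for _ in range(20000):  # forward-Euler discretization of dx/dt = -rho * grad Psi
    x = x - dt * rho * grad(lambda z: merit(z, F), x)
print(x, F(x))  # expected: x close to (1.4, 1.2) with F(x) close to 0
```

Asymptotic stability of the network corresponds, in this discretized picture, to the trajectory settling at a zero of the merit function.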
Neurocomputing, 2009
This paper presents two neural networks for finding the optimal point of convex optimization problems and variational inequality problems, respectively. The domain of the functions that define the problems is a convex set determined by convex inequality constraints and affine equality constraints. The neural networks are based on gradient descent and exact penalization, and the convergence analysis rests on a control Lyapunov function analysis, since the dynamical system corresponding to each neural network may be viewed as a so-called variable-structure closed-loop control system.
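The exact-penalty construction referred to here typically has the following shape, sketched under the assumption of a standard nonsmooth l1 penalty (the paper's precise penalty term may differ). For the problem min f(x) subject to g_i(x) \le 0 and Ax = b, one takes

$$P_{\mu}(x) = f(x) + \mu \sum_{i} \max\{0,\; g_{i}(x)\} + \mu\,\|Ax - b\|_{1},$$

and the network follows the (sub)gradient flow \dot{x}(t) \in -\partial P_{\mu}(x(t)); exactness means that for all sufficiently large \mu > 0, minimizers of P_{\mu} coincide with solutions of the constrained problem.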
A Recurrent Neural Network for Solving a Class of General Variational Inequalities
IEEE Transactions on Systems, Man, and Cybernetics, 2007
In this paper, we propose a penalty-based recurrent neural network for solving a class of constrained optimization problems with generalized convex objective functions. The model has a simple structure described by a differential inclusion. It is also applicable to any nonsmooth optimization problem with affine equality and convex inequality constraints, provided that the objective function is regular and pseudoconvex on the feasible region of the problem. It is proven herein that the state vector of the proposed neural network reaches the feasible region in finite time, stays there thereafter, and converges to the optimal solution set of the problem.
Neural Networks for Solving Constrained Optimization Problems
In this paper we consider several neural network architectures for solving constrained optimization problems with inequality constraints. We present a new architecture based on the exact penalty function approach. Simulation results based on SIMULINK® models are given and compared.
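As a rough Python analogue of such a SIMULINK simulation (an illustrative sketch only: the toy problem, the penalty weight `mu`, and all names are hypothetical), the penalty network's dynamics can be integrated with a standard ODE solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy problem: minimize f(x) = ||x - c||^2 subject to x1 + x2 <= 1.
c = np.array([1.0, 1.0])
mu = 10.0  # penalty weight, assumed large enough for exactness

def rhs(t, x):
    """Right-hand side of the penalty network dx/dt = -grad P(x), with
    P(x) = f(x) + mu * max(0, g(x)) and g(x) = x1 + x2 - 1; a fixed
    subgradient is used on the constraint boundary."""
    grad_f = 2.0 * (x - c)
    g = x[0] + x[1] - 1.0
    grad_pen = mu * np.array([1.0, 1.0]) if g > 0 else np.zeros(2)
    return -(grad_f + grad_pen)

sol = solve_ivp(rhs, (0.0, 10.0), y0=np.zeros(2), max_step=1e-3)
print(sol.y[:, -1])  # expected to approach the constrained minimizer (0.5, 0.5)
```

The discontinuous right-hand side induces sliding along the constraint boundary, which is exactly the variable-structure behavior these penalty-based architectures are designed to exhibit.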