Van Hien Nguyen - Academia.edu
Papers by Van Hien Nguyen
Journal of Global Optimization, 2008
We consider Nash–Cournot oligopolistic market equilibrium models with concave cost functions. Concavity implies, in general, that a local equilibrium point is not necessarily a global one. We give conditions for the existence of global equilibrium points. We then propose an algorithm for finding a global equilibrium point or for detecting that the problem is unsolvable. Numerical experiments on some randomly generated data show the efficiency of the proposed algorithm.
Engineering Optimization, 1987
This paper is concerned with the convex linearization method recently proposed by Fleury and Braibant for structural optimization. We give here a mathematical convergence analysis of this method. We also discuss some modifications of it.
Optimization, 2008
We make use of the auxiliary problem principle to develop iterative algorithms for solving equilibrium problems. The first is an extension of the extragradient algorithm to equilibrium problems. In this algorithm the equilibrium bifunction is not required to satisfy any monotonicity property, but it must satisfy a certain Lipschitz-type condition. To avoid this requirement, we propose linesearch procedures commonly used in variational inequalities to obtain projection-type algorithms for solving equilibrium problems. Applications to mixed variational inequalities are discussed. A special class of equilibrium problems is investigated and some preliminary computational results are reported.
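The extragradient idea referred to above can be sketched for the simpler special case of a variational inequality VI(F, C) with a box feasible set. This is a minimal illustration, not the paper's equilibrium-problem algorithm; the affine operator `F`, the box bounds, and the step size are invented for the example:

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def extragradient(F, x0, lo, hi, step=0.1, iters=200):
    """Korpelevich-style extragradient: a prediction step at x_k,
    then a correction step using the operator value at the predicted point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project_box(x - step * F(x), lo, hi)   # prediction
        x = project_box(x - step * F(y), lo, hi)   # correction
    return x

# Illustrative monotone affine operator F(x) = A x + b (identity plus a skew part),
# chosen so the solution (1, 1) lies inside the box [0, 5]^2.
A = np.array([[1.0, 2.0], [-2.0, 1.0]])
b = np.array([-3.0, 1.0])
F = lambda x: A @ x + b
sol = extragradient(F, np.zeros(2), lo=0.0, hi=5.0, step=0.1)
```

Here the step size satisfies `step < 1/L` with `L = ||A|| = sqrt(5)`, which is the standard Lipschitz-type condition under which the extragradient iteration converges without any linesearch.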
Journal of Optimization Theory and Applications, 2009
In this paper, ε-optimality conditions are given for a nonconvex programming problem which has an infinite number of constraints. The objective function and the constraint functions are supposed to be locally Lipschitz on a Banach space. In the first part, we introduce the concept of a regular ε-solution and propose a generalization of the Karush-Kuhn-Tucker conditions. These conditions hold up to ε and are obtained by weakening the classical complementarity conditions. Furthermore, they are satisfied without assuming any constraint qualification. We then prove that these conditions are also sufficient for ε-optimality when the constraints are convex and the objective function is ε-semiconvex. In the second part, we define quasisaddlepoints associated with an ε-Lagrangian functional and investigate their relationships with the generalized KKT conditions. In particular, we formulate a Wolfe-type dual problem which allows us to present ε-duality theorems and relationships between the KKT conditions and regular ε-solutions for the dual. Finally, we apply these results to two important infinite programming problems: the cone-constrained convex problem and the semidefinite programming problem.
Journal of Optimization Theory and Applications, 1996
In recent years, the so-called auxiliary problem principle has been used to derive many iterative-type algorithms for solving optimal control, mathematical programming, and variational inequality problems. In the present paper, we use this principle in conjunction with epiconvergence theory to introduce and study a general family of perturbation methods for solving nonlinear variational inequalities over a product space of reflexive Banach spaces. We do not assume that the monotone operator involved in our general variational inequality problem is of potential type. Several known iterative algorithms, which can be obtained from our theory, are also discussed.
Journal of Global Optimization, 2008
In this paper, we present several new implementable methods for solving a generalized fractional program with convex data. They are Dinkelbach-type methods in which a prox-regularization term is added to avoid the numerical difficulties arising when the solution of the problem is not unique. In these methods, at each iteration a regularized parametric problem is solved inexactly to obtain an approximation of the optimal value of the problem. Since the parametric problem is nonsmooth and convex, we propose to solve it by using a classical bundle method where the parameter is updated after each ‘serious step’. We mainly study two kinds of such steps, and we prove the convergence and the rate of convergence of each of the corresponding methods. Finally, we present some numerical experiments to illustrate the behavior of the proposed algorithms, and we discuss the practical efficiency of each one.
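The basic Dinkelbach scheme underlying such methods (without the paper's prox-regularization and bundle machinery) can be sketched on a toy single-ratio problem: minimize f(x)/g(x) with g > 0 by repeatedly solving the parametric problem min f(x) − λ·g(x) and updating λ. The functions `f`, `g` and the closed-form subproblem solver below are hypothetical choices for illustration:

```python
def dinkelbach(f, g, solve_sub, lam0=0.0, tol=1e-10, max_iter=50):
    """Dinkelbach's parametric scheme for min f(x)/g(x), g > 0:
    solve min_x f(x) - lam*g(x), then set lam = f(x)/g(x); the optimal
    ratio is reached when the parametric optimal value hits zero."""
    lam = lam0
    for _ in range(max_iter):
        x = solve_sub(lam)
        val = f(x) - lam * g(x)   # optimal value of the parametric problem
        if abs(val) < tol:
            return x, lam
        lam = f(x) / g(x)
    return x, lam

# Toy ratio (x^2 + 4) / (x + 2) on [0, 5]; the parametric subproblem
# x^2 + 4 - lam*(x + 2) has the exact minimizer clip(lam/2, 0, 5).
f = lambda x: x * x + 4.0
g = lambda x: x + 2.0
solve_sub = lambda lam: min(max(lam / 2.0, 0.0), 5.0)
x_star, lam_star = dinkelbach(f, g, solve_sub)
```

For this example the exact optimal ratio is 4(√2 − 1) ≈ 1.6569, attained at x = 2√2 − 2; the iteration reaches it in a handful of steps, reflecting the superlinear convergence typical of Dinkelbach-type methods.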
Journal of Optimization Theory and Applications, 1979
A well-known approach to constrained minimization is via a sequence of unconstrained optimization computations applied to a penalty function. This paper shows how it is possible to generalize Murphy's penalty method for differentiable problems of mathematical programming (Ref. 1) to solve nondifferentiable problems of finding saddle points with constraints. As in mathematical programming, it is shown that the method has the advantages of both the Fiacco and McCormick exterior and interior penalty methods (Ref. 2). Under mild assumptions, the method has the desirable property that all trial solutions become feasible after a finite number of iterations. The rate of convergence is also presented. It should be noted that the results presented here have been obtained without making any use of differentiability assumptions.
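As a generic illustration of the sequential-penalty idea (not the saddle-point method of this paper), here is a minimal exterior-penalty loop on a one-dimensional problem whose penalized subproblem has a closed-form minimizer; all names and constants are invented for the sketch:

```python
def exterior_penalty(solve_penalized, mus):
    """Solve a sequence of unconstrained penalized problems with an
    increasing penalty parameter mu; the iterates approach the feasible
    set from outside, converging to the constrained minimizer."""
    return [solve_penalized(mu) for mu in mus]

# Toy problem: min x^2 subject to x >= 1. The quadratic exterior penalty
# x^2 + mu*max(0, 1 - x)^2 has the exact minimizer mu/(1 + mu) < 1,
# so every iterate is infeasible but the limit (as mu -> inf) is x = 1.
solve_penalized = lambda mu: mu / (1.0 + mu)
iterates = exterior_penalty(solve_penalized, [1.0, 10.0, 100.0, 1000.0])
```

The monotone approach to feasibility from the infeasible side is the hallmark of exterior penalties; the paper's method combines this behavior with interior-penalty features so that trial points become feasible after finitely many iterations.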
This paper considers some convergence aspects of an optimization algorithm, whose basic idea is closely related to penalty and augmented Lagrangian methods, proposed by Kort and Bertsekas in 1972. We prove, without convexity assumptions, that the algorithm has a parametrically superlinear root convergence rate. We also give a partial global convergence result for the algorithm considered.
Journal of Global Optimization, 2010
We consider a generalized equilibrium problem involving DC functions, which is called (GEP). For this problem we establish two new dual formulations based on Toland-Fenchel-Lagrange duality for DC programming problems. The first allows us to obtain a unified dual analysis for many interesting problems. In particular, this dual coincides with the dual problem proposed by Martinez-Legaz and Sosa (J Glob Optim 25:311–319, 2006) for equilibrium problems in the sense of Blum and Oettli. Furthermore, it is equivalent to Mosco's dual problem (Mosco in J Math Anal Appl 40:202–206, 1972) when applied to a variational inequality problem. The second dual problem generalizes to our problem another dual scheme that has been recently introduced by Jacinto and Scheimberg (Optimization 57:795–805, 2008) for convex equilibrium problems. Through these schemes, as by-products, we obtain new optimality conditions for (GEP) and also gap functions for (GEP), which cover the ones in Antangerel et al. (J Oper Res 24:353–371, 2007, Pac J Optim 2:667–678, 2006) for variational inequalities and standard convex equilibrium problems. These results, in turn, when applied to DC and convex optimization problems with convex constraints (considered as special cases of (GEP)), lead to Toland-Fenchel-Lagrange duality for DC problems in Dinh et al. (Optimization 1–20, 2008, J Convex Anal 15:235–262, 2008), and Fenchel-Lagrange and Lagrange dualities for convex problems as in Antangerel et al. (Pac J Optim 2:667–678, 2006), Bot and Wanka (Nonlinear Anal to appear), Jeyakumar et al. (Applied Mathematics research report AMR04/8, 2004). In addition, as consequences of the main results, we obtain some new optimality conditions for DC and convex problems.
Mathematical Programming, 2009
We present a bundle method for solving nonsmooth convex equilibrium problems based on the auxiliary problem principle. First, we consider a general algorithm that we prove to be convergent. Then we explain how to make this algorithm implementable. The strategy is to approximate the nonsmooth convex functions by piecewise linear convex functions in such a way that the subproblems are easy to solve and the convergence is preserved. In particular, we introduce a stopping criterion which is satisfied after finitely many iterations and which gives rise to Δ-stationary points. Finally, we apply our implementable algorithm to the particular case of single-valued and multivalued variational inequalities and we recover the results recently obtained by Salmon et al. [18].
Journal of Optimization Theory and Applications, 2004
We consider an extension of the auxiliary problem principle for solving a general variational inequality problem. This problem consists of finding a zero of the sum of two operators defined on a real Hilbert space H: the first is a monotone single-valued operator; the second is the subdifferential of a lower semicontinuous proper convex function ϕ. To make the subproblems easier to solve, we consider two kinds of lower approximations for the function ϕ: a smooth approximation and a piecewise linear convex approximation. We explain how to construct these approximations and we prove the weak convergence and the strong convergence of the sequence generated by the corresponding algorithms under a pseudo Dunn condition on the single-valued operator. Finally, we report some numerical experiments to illustrate the behavior of the two algorithms.
SIAM Journal on Control and Optimization, 1991
Journal of Optimization Theory and Applications, 1979
Recently, Kort and Bertsekas (Ref. 1) and Hartman (Ref. 2) independently presented a new penalty function algorithm of exponential type for solving inequality-constrained minimization problems. The main purpose of this work is to give a proof of the rate of convergence of a modification of the exponential penalty method proposed by these authors. We show that the sequence of points generated by the modified algorithm converges to the solution of the original nonconvex problem linearly and that the sequence of estimates of the optimal Lagrange multiplier converges to this multiplier superlinearly. The question of convergence of the modified method is discussed. The present paper hinges on ideas of Mangasarian (Ref. 3), but the case considered here is not covered by Mangasarian's theory.
Journal of Optimization Theory and Applications, 2005
We apply the Banach contraction-mapping fixed-point principle for solving multivalued strongly monotone variational inequalities. Then, we couple this algorithm with the proximal-point method for solving monotone multivalued variational inequalities. We prove the convergence rate of this algorithm and report some computational results.
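The contraction-mapping idea can be sketched for the simpler single-valued case: if F is strongly monotone with modulus μ and Lipschitz with constant L, the map x ↦ P_C(x − λF(x)) is a Banach contraction whenever 0 < λ < 2μ/L², so simple fixed-point iteration converges linearly. The operator, feasible set, and step below are illustrative data, not from the paper:

```python
import numpy as np

def projected_fixed_point(F, project, x0, step, tol=1e-10, max_iter=1000):
    """Banach fixed-point iteration for VI(F, C): iterate the map
    x -> P_C(x - step*F(x)), which is a contraction for a strongly
    monotone, Lipschitz F when 0 < step < 2*mu/L**2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = project(x - step * F(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Strongly monotone toy operator F(x) = 2x - c (mu = L = 2) on the
# nonnegative orthant; the VI solution is max(c/2, 0) = (1.5, 0).
c = np.array([3.0, -1.0])
F = lambda x: 2.0 * x - c
project = lambda x: np.maximum(x, 0.0)   # projection onto R^n_+
sol = projected_fixed_point(F, project, np.zeros(2), step=0.4)
```

Here 2μ/L² = 1, so step = 0.4 lies safely inside the contraction range and the iterates converge geometrically.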
Journal of Optimization Theory and Applications, 2000
Many algorithms for solving variational inequality problems can be derived from the auxiliary problem principle introduced several years ago by Cohen. In recent years, the convergence of these algorithms has been established under weaker and weaker monotonicity assumptions: strong (pseudo) monotonicity has been replaced by the (pseudo) Dunn property. Moreover, well-suited assumptions have given rise to local versions of these results. In this paper, we combine the auxiliary problem principle with epiconvergence theory to present and study a basic family of perturbed methods for solving general variational inequalities. For example, this framework allows us to consider barrier functions and interior approximations of feasible domains. Our aim is to emphasize the global or local assumptions to be satisfied by the perturbed functions in order to derive convergence results similar to those without perturbations. In particular, we generalize previous results obtained by Makler-Scheimberg et al.
Journal of Global Optimization, 2009
In this article we present a new and efficient method for solving equilibrium problems on polyhedra. The method is based on an interior-quadratic proximal term which replaces the usual quadratic proximal term. This leads to an interior proximal-type algorithm. Each iteration consists of a prediction step followed by a correction step, as in the extragradient method. In the first algorithm each of these steps is obtained by solving an unconstrained minimization problem, while in the second algorithm the correction step is replaced by an Armijo-backtracking linesearch followed by a hyperplane projection step. We prove that our algorithms are convergent under mild assumptions: pseudomonotonicity for the two algorithms and a Lipschitz property for the first one. Finally we present some numerical experiments to illustrate the behavior of the proposed algorithms.
SIAM Journal on Optimization, 2004