Newton and interior-point methods for (constrained) nonconvex-nonconcave minmax optimization with stability and instability guarantees


Summary: In this work, we describe a variant of the Newton interior-point method of [8] for nonlinear programming problems. In this scheme, the perturbation parameter can be chosen within a range of values, and an iterative method can be used to approximately solve the reduced linear system arising at each step. We identify the inner termination rule that…
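As a rough illustration of such an inexact scheme (a minimal sketch, not the authors' algorithm: the toy log-barrier subproblem, the forcing term eta * mu, and all function names below are assumptions), the following applies Newton's method to the perturbed condition ∇f(x) − μ/x = 0 and solves each reduced linear system only approximately by conjugate gradients, with the inner residual tolerance tied to the perturbation parameter μ:

```python
import numpy as np

def cg_inexact(H, rhs, tol, maxiter=200):
    # Plain conjugate gradients on H d = rhs, stopped once the
    # residual norm falls below `tol` (the inner termination rule).
    d = np.zeros_like(rhs)
    r = rhs.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        if np.sqrt(rs) <= tol:
            break
        Hp = H @ p
        alpha = rs / (p @ Hp)
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

def inexact_newton_barrier(grad_f, hess_f, x, mu=1.0, eta=0.5,
                           sigma=0.2, outer=20, inner=50):
    # Hypothetical outer loop: for each perturbation parameter mu,
    # take inexact Newton steps on grad_f(x) - mu/x = 0 (the optimality
    # condition of min f(x) - mu * sum(log x)), then shrink mu.
    for _ in range(outer):
        for _ in range(inner):
            g = grad_f(x) - mu / x                # perturbed residual
            if np.linalg.norm(g) <= mu:           # subproblem solved
                break
            H = hess_f(x) + np.diag(mu / x**2)    # reduced system matrix
            d = cg_inexact(H, -g, tol=eta * mu)   # approximate inner solve
            t = 1.0
            while np.any(x + t * d <= 0):         # keep iterate interior
                t *= 0.5
            x = x + t * d
        mu *= sigma                               # shrink the perturbation
    return x

# Toy usage: strictly convex quadratic over x > 0.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, 2.0])
x_star = inexact_newton_barrier(lambda x: Q @ x - c, lambda x: Q,
                                x=np.ones(2))
```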

Summary: In this paper, we propose a primal interior-point method for large sparse generalized minimax optimization. After a short introduction, where the problem is stated, we introduce the basic equations of the Newton method applied to the KKT conditions and propose a primal interior-point method, i.e., an interior-point method that uses explicitly computed approximations of the Lagrange multipliers instead of their updates. Next, we describe the basic algorithm and give more details concerning its implementation, covering numerical differentiation, variable metric updates, and the barrier parameter decrease. Using standard weak assumptions, we prove that this algorithm is globally convergent if a bounded barrier is used. Then, using stronger assumptions, we prove that it is also globally convergent for the logarithmic barrier. Finally, we present results of computational experiments confirming the efficiency of the primal interior-point method for special cases of generalized minimax problems.
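The structure of such a primal scheme can be sketched as follows (a hypothetical illustration, not the paper's algorithm: it treats min_x max_i f_i(x) through the barrier z − μ Σ_i log(z − f_i(x)), computes the multiplier approximations explicitly as u_i = μ/(z − f_i(x)), and uses simple gradient steps with backtracking in place of Newton iterations; all names and parameter choices are assumptions):

```python
import numpy as np

def primal_ip_minimax(fs, grads, x, mu=1.0, sigma=0.1, tol=1e-8,
                      outer=25, inner=200):
    # Sketch of a primal interior-point method for min_x max_i f_i(x),
    # via the barrier subproblems min_{x,z} z - mu * sum(log(z - f_i(x))).
    z = max(f(x) for f in fs) + 1.0              # strictly feasible start

    def B(x_, z_):                               # barrier function
        s_ = np.array([z_ - f(x_) for f in fs])
        return np.inf if np.any(s_ <= 0) else z_ - mu * np.log(s_).sum()

    for _ in range(outer):
        for _ in range(inner):
            s = np.array([z - f(x) for f in fs])  # slacks, kept > 0
            u = mu / s                            # explicit multiplier estimates
            gx = sum(ui * g(x) for ui, g in zip(u, grads))
            gz = 1.0 - u.sum()                    # ~0 once sum(u) ~ 1
            grad = np.concatenate([gx, [gz]])
            if np.linalg.norm(grad) <= mu:        # subproblem solved
                break
            d, t = -grad, 1.0                     # descent step, Armijo backtracking
            while B(x + t * d[:-1], z + t * d[-1]) > B(x, z) - 1e-4 * t * (grad @ grad):
                t *= 0.5
                if t < 1e-12:
                    break
            x, z = x + t * d[:-1], z + t * d[-1]
        if mu <= tol:
            break
        mu *= sigma                               # barrier parameter decrease
    return x, z

# Toy usage: min_x max((x-1)^2, (x+1)^2) has optimum x = 0, value 1.
fs = [lambda x: (x[0] - 1.0)**2, lambda x: (x[0] + 1.0)**2]
grads = [lambda x: np.array([2.0 * (x[0] - 1.0)]),
         lambda x: np.array([2.0 * (x[0] + 1.0)])]
x_opt, z_opt = primal_ip_minimax(fs, grads, x=np.array([0.7]))
```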

This paper proposes and justifies two new globally convergent Newton-type methods to solve unconstrained and constrained problems of nonsmooth optimization by using tools of variational analysis and generalized differentiation. Both methods are coderivative-based and employ generalized Hessians (coderivatives of subgradient mappings) associated with objective functions that are either of class C^{1,1} or are represented in the form of convex composite optimization, where one of the terms may be extended-real-valued. The proposed globally convergent algorithms are of two types. The first extends the damped Newton method and requires positive definiteness of the generalized Hessians for its well-posedness and efficient performance, while the other algorithm is of the Levenberg-Marquardt type and is well defined when the generalized Hessians are merely positive semidefinite. The obtained convergence rates for both methods are at least linear but become superlinear under the so-called semismooth* property of subgradient mappings.
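To make the contrast between the two methods concrete, here is a minimal numpy sketch (an assumption-laden illustration, not the paper's coderivative machinery: `gen_hess(x)` is assumed to return one element of the generalized Hessian at x, and weighting the regularization by the current gradient norm is just one common Levenberg-Marquardt rule):

```python
import numpy as np

def damped_newton(f, grad, gen_hess, x, tol=1e-10, max_iter=200):
    # Damped Newton sketch: well-posed when the generalized Hessian is
    # positive definite, so the Newton direction is a descent direction.
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        d = np.linalg.solve(gen_hess(x), -g)
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):  # Armijo backtracking
            t *= 0.5                                     # (the "damping")
        x = x + t * d
    return x

def levenberg_marquardt(f, grad, gen_hess, x, tol=1e-10, max_iter=200):
    # Levenberg-Marquardt sketch: the shift gn * I keeps the linear system
    # solvable when the generalized Hessian is merely positive semidefinite.
    n = len(x)
    for _ in range(max_iter):
        g = grad(x)
        gn = np.linalg.norm(g)
        if gn <= tol:
            break
        d = np.linalg.solve(gen_hess(x) + gn * np.eye(n), -g)
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
    return x
```

For a C^{1,1} objective such as f(x) = ½‖max(x, 0)‖² + ½‖x‖², whose gradient is Lipschitz but not differentiable at the kinks, gen_hess(x) may return any matrix from the generalized Hessian at the current point.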