Solving sparse polynomial optimization problems with chordal structure using the sparse, bounded-degree sum-of-squares hierarchy

Sums of Squares and Semidefinite Program Relaxations for Polynomial Optimization Problems with Structured Sparsity

SIAM Journal on Optimization, 2006

Unconstrained and inequality constrained sparse polynomial optimization problems (POPs) are considered. A correlative sparsity pattern graph is defined to find a certain sparse structure in the objective and constraint polynomials of a POP. Based on this graph, sets of supports for sums of squares (SOS) polynomials that lead to efficient SOS and semidefinite programming (SDP) relaxations are obtained. Numerical results from various test problems are included to show the improved performance of the SOS and SDP relaxations.
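The correlative sparsity pattern graph described above can be sketched concretely: variables are nodes, and an edge joins two variables whenever they appear together in a monomial of the objective or in the same constraint. The maximal cliques of (a chordal extension of) this graph then index the supports of the sparse SOS polynomials. A minimal illustration, with a hypothetical helper name and a toy POP chosen for this note:

```python
from itertools import combinations

def correlative_sparsity_graph(objective_monomials, constraint_vars):
    """Build the correlative sparsity pattern graph of a POP.

    Nodes are variable indices; an edge {i, j} is added whenever x_i and
    x_j appear together in a monomial of the objective, or appear in the
    same constraint polynomial.
    """
    nodes, edges = set(), set()
    for group in list(objective_monomials) + list(constraint_vars):
        nodes |= set(group)
        for i, j in combinations(sorted(set(group)), 2):
            edges.add((i, j))
    return nodes, edges

# Toy POP: objective x1^2*x2 + x2*x3 + x3^2, one constraint g(x1, x4) >= 0.
# Monomials are given as tuples of variable indices (with multiplicity).
nodes, edges = correlative_sparsity_graph(
    objective_monomials=[(1, 1, 2), (2, 3), (3, 3)],
    constraint_vars=[(1, 4)],
)
print(sorted(nodes))  # [1, 2, 3, 4]
print(sorted(edges))  # [(1, 2), (1, 4), (2, 3)]
```

In this example the graph is already chordal, and its maximal cliques {1, 2}, {2, 3}, {1, 4} are much smaller than the full variable set, which is exactly what makes the resulting SOS/SDP relaxations cheaper.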

Generalized Lagrangian Duals and Sums of Squares Relaxations of Sparse Polynomial Optimization Problems

SIAM Journal on Optimization, 2005

Sequences of generalized Lagrangian duals and their SOS (sums of squares of polynomials) relaxations for a POP (polynomial optimization problem) are introduced. Sparsity of polynomials in the POP is used to reduce the sizes of the Lagrangian duals and their SOS relaxations. It is proved that the optimal values of the Lagrangian duals in the sequence converge to the optimal value of the POP using a method from the penalty function approach. The sequence of SOS relaxations is transformed into a sequence of SDP (semidefinite program) relaxations of the POP, which correspond to duals of modification and generalization of SDP relaxations given by Lasserre for the POP.

Sum-of-Squares Hierarchies for Binary Polynomial Optimization

Integer Programming and Combinatorial Optimization, 2021

We consider the sum-of-squares hierarchy of approximations for the problem of minimizing a polynomial f over the Boolean hypercube B^n = {0, 1}^n. This hierarchy provides, for each integer r ∈ N, a lower bound f_(r) on the minimum f_min of f, given by the largest scalar λ for which the polynomial f − λ is a sum of squares on B^n with degree at most 2r. We analyze the quality of these bounds by estimating the worst-case error f_min − f_(r) in terms of the least roots of the Krawtchouk polynomials. As a consequence, for fixed t ∈ [0, 1/2], we can show that this worst-case error in the regime r ≈ t·n is of the order 1/2 − √(t(1 − t)) as n tends to ∞. Our proof combines classical Fourier analysis on B^n with the polynomial kernel technique and existing results on the extremal roots of Krawtchouk polynomials. This link to roots of orthogonal polynomials relies on a connection between the hierarchy of lower bounds f_(r) and another hierarchy of upper bounds f^(r), for which we are also able to establish the same error analysis. Our analysis extends to the minimization of a polynomial over the q-ary cube (Z/qZ)^n.

Keywords: Binary polynomial optimization · Lasserre hierarchy · Sum-of-squares polynomials · Fourier analysis · Krawtchouk polynomials · Polynomial kernels · Semidefinite programming

This optimization problem is NP-hard in general, already for d = 2. Indeed, as is well known, one can model an instance of max-cut on the complete graph K_n with edge weights w = (w_ij) as a problem of the form (1) by setting f(x) = − Σ_{1≤i<j≤n} w_ij (x_i − x_j)^2.
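The max-cut encoding mentioned in the abstract is easy to check directly: on {0, 1}^n, (x_i − x_j)^2 equals 1 exactly when the edge {i, j} is cut by the partition induced by x, so minimizing f returns minus the max-cut value. A small brute-force sketch over the hypercube (the instance below is a toy example chosen for this note):

```python
from itertools import product

def f(x, w):
    # Objective from the abstract: f(x) = -sum_{i<j} w_ij * (x_i - x_j)^2.
    # On the hypercube, (x_i - x_j)^2 is 1 iff the edge {i, j} is cut.
    n = len(x)
    return -sum(w[i][j] * (x[i] - x[j]) ** 2
                for i in range(n) for j in range(i + 1, n))

# Triangle K_3 with unit weights: the maximum cut has value 2,
# so the minimum of f over {0,1}^3 is -2.
w = [[0, 1, 1], [0, 0, 1], [0, 0, 0]]
fmin = min(f(x, w) for x in product((0, 1), repeat=3))
print(fmin)  # -2
```

This also illustrates why d = 2 already captures an NP-hard problem: the objective is quadratic, yet its minimum over the hypercube encodes max-cut.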

A numerical evaluation of the bounded degree sum-of-squares hierarchy of Lasserre, Toh, and Yang on the pooling problem

The bounded-degree sum-of-squares (BSOS) hierarchy of Lasserre, Toh, and Yang [EURO J. Comput. Optim., 2015] constructs lower bounds for a general polynomial optimization problem with compact feasible set by solving a sequence of semidefinite programming (SDP) problems. Lasserre, Toh, and Yang prove that these lower bounds converge to the optimal value of the original problem, under some assumptions. In this paper, we analyze the BSOS hierarchy and study its numerical performance on a specific class of bilinear programming problems, called pooling problems, that arise in the refinery and chemical process industries.

Perturbed sums-of-squares theorem for polynomial optimization and its applications

Optimization Methods and Software, 2015

We consider a property of positive polynomials on a compact set with a small perturbation. When applied to a Polynomial Optimization Problem (POP), the property implies that the optimal value of the corresponding SemiDefinite Programming (SDP) relaxation with sufficiently large relaxation order is bounded from below by f* − ε and from above by f* + ε(n + 1), where f* is the optimal value of the POP. We propose new SDP relaxations for POP based on modifications of existing sums-of-squares representation theorems. An advantage of our SDP relaxations is that in many cases they are of considerably smaller dimension than those originally proposed by Lasserre. We present some applications and the results of our computational experiments.

Convergent SDP‐Relaxations in Polynomial Optimization with Sparsity

SIAM Journal on Optimization, 2006

We consider a polynomial programming problem P on a compact semi-algebraic set K ⊂ R^n, described by m polynomial inequalities g_j(X) ≥ 0, and with criterion f ∈ R[X]. We propose a hierarchy of semidefinite relaxations in the spirit of those of Waki et al. [9]. In particular, the SDP-relaxation of order r has the following two features: (a) the number of variables is O(κ^{2r}), where κ = max[κ_1, κ_2] with κ_1 (resp. κ_2) being the maximum number of variables appearing in the monomials of f (resp. appearing in a single constraint g_j(X) ≥ 0); (b) the largest size of the LMIs (Linear Matrix Inequalities) is O(κ^r). This is to be compared with the respective number of variables O(n^{2r}) and LMI size O(n^r) in the original SDP-relaxations defined in [11]. Therefore, great computational savings are expected in case of sparsity in the data {g_j, f}, i.e. when κ is small, a frequent case in practical applications of interest. The novelty with respect to [9] is that we prove convergence to the global optimum of P when the sparsity pattern satisfies a condition often encountered in large-scale problems of practical interest, known as the running intersection property in graph theory. In such cases, and as a by-product, we also obtain a new representation result for polynomials positive on a basic closed semi-algebraic set: a sparse version of Putinar's Positivstellensatz [16].
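The running intersection property invoked above is a simple combinatorial condition on an ordered list of cliques I_1, …, I_p: for every k ≥ 2, the intersection of I_k with the union of the earlier cliques must be contained in some single earlier clique I_j. A minimal checker (the function name and examples are illustrative, not from the paper):

```python
def has_rip(cliques):
    """Check the running intersection property for an ordered clique list:
    for every k >= 2, I_k intersected with (I_1 ∪ ... ∪ I_{k-1}) must be
    contained in some earlier clique I_j, j < k."""
    seen = set()
    for k, clique in enumerate(cliques):
        if k > 0:
            inter = set(clique) & seen
            if not any(inter <= set(cliques[j]) for j in range(k)):
                return False
        seen |= set(clique)
    return True

print(has_rip([{1, 2}, {2, 3}, {3, 4}]))  # True: a chain of cliques
print(has_rip([{1, 2}, {3, 4}, {1, 3}]))  # False: {1, 3} meets two earlier
                                          # cliques but lies in neither
```

Note the property depends on the ordering: reordering the second example as [{1, 2}, {1, 3}, {3, 4}] satisfies it, which is why chordal extensions (whose maximal cliques always admit such an ordering) are the standard route to RIP in sparse SDP relaxations.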

CS-TSSOS: Correlative and Term Sparsity for Large-Scale Polynomial Optimization

ACM Transactions on Mathematical Software

This work proposes a new moment-SOS hierarchy, called CS-TSSOS, for solving large-scale sparse polynomial optimization problems. Its novelty is to exploit simultaneously correlative sparsity and term sparsity by combining the advantages of two existing frameworks for sparse polynomial optimization. The former is due to Waki et al. [40], while the latter was initially proposed by Wang et al. [42] and later exploited in the TSSOS hierarchy [46, 47]. In doing so we obtain CS-TSSOS, a two-level hierarchy of semidefinite programming relaxations with (i) the crucial property of involving blocks of SDP matrices and (ii) the guarantee of convergence to the global optimum under certain conditions. We demonstrate its efficiency and scalability on several large-scale instances of the celebrated Max-Cut problem and the important industrial optimal power flow problem, involving up to six thousand variables and tens of thousands of constraints.

Decomposition-based Method for Sparse Semidefinite Relaxations of Polynomial Optimization Problems

Journal of Optimization Theory and Applications, 2010

We consider polynomial optimization problems pervaded by a sparsity pattern. It has been shown in [1, 2] that the optimal solution of a polynomial programming problem with structured sparsity can be computed by solving a series of semidefinite relaxations that possess the same kind of sparsity. We aim at solving these relaxations with a decomposition-based method, which partitions the relaxations according to their sparsity pattern. The decomposition-based method that we propose is an extension to semidefinite programming of the Benders decomposition for linear programs [3].