Applications of Second-Order Cone Programming

The Q method for second order cone programming

Computers & Operations Research, 2008

We develop the Q method for the second-order cone programming problem. The algorithm is an adaptation of the Q method for semidefinite programming originally developed by Alizadeh, Haeberly and Overton [A new primal-dual interior point method for semidefinite programming. In: Proceedings of the fifth SIAM conference on applications of linear algebra, Snowbird, Utah, 1994] and [Primal-dual interior-point methods for semidefinite programming: convergence rates, stability and numerical results. SIAM Journal on Optimization 1998;8(3):746-68 [electronic]]. We take advantage of the special algebraic structure associated with second-order cone programs to formulate the Q method. Furthermore, we discuss the convergence properties of the algorithm. Finally, some numerical results are presented.
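The "special algebraic structure" invoked here is the Euclidean Jordan algebra of the second-order cone, in which a vector x = (x0, x̄) has the two eigenvalues x0 ± ||x̄||₂ and an associated Jordan frame, and lies in the cone exactly when both eigenvalues are nonnegative. As a minimal illustration (our own sketch, not code from the paper), the following NumPy function computes this spectral decomposition:

```python
import numpy as np

def soc_spectral_decomposition(x):
    """Spectral decomposition of x = (x0, xbar) w.r.t. the second-order cone.

    Returns the eigenvalues (lam1, lam2) and the Jordan frame (c1, c2) with
    x = lam1 * c1 + lam2 * c2.  x lies in the cone iff both eigenvalues are
    nonnegative, and in its interior iff both are positive.
    """
    x0, xbar = x[0], x[1:]
    norm = np.linalg.norm(xbar)
    # Any unit vector works when xbar = 0; pick the first coordinate axis.
    u = xbar / norm if norm > 0 else np.eye(len(xbar))[0]
    lam1, lam2 = x0 + norm, x0 - norm
    c1 = 0.5 * np.concatenate(([1.0], u))
    c2 = 0.5 * np.concatenate(([1.0], -u))
    return (lam1, lam2), (c1, c2)

x = np.array([2.0, 1.0, -1.0])
(lam1, lam2), (c1, c2) = soc_spectral_decomposition(x)
print(lam1, lam2)                              # 2 + sqrt(2), 2 - sqrt(2): interior point
print(np.allclose(lam1 * c1 + lam2 * c2, x))   # True: the decomposition reconstructs x
```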

A Polynomial Primal-Dual Path-Following Algorithm for Second-order Cone Programming

Second-order cone programming (SOCP) is the problem of minimizing a linear objective function over the intersection of an affine space and a product of second-order cones. Recently this problem has received more attention because of its various important applications, including quadratically constrained convex quadratic programming. In this paper we deal with a primal-dual path-following algorithm for SOCP to show that many of the ideas developed for primal-dual algorithms for LP and SDP carry over to this problem. We define neighborhoods of the central trajectory in terms of the "eigenvalues" of the second-order cone, develop an analogue of the HRVW/KSH/M direction, and establish O(√n log(1/ε)), O(n log(1/ε)) and O(n^{3/2} log(1/ε)) iteration-complexity bounds for short-step, semilong-step and long-step path-following algorithms, respectively, to reduce the duality gap by a factor of ε.

Keywords: second-order cone, interior-point methods, polynomial complexity, primal-dual path-following methods.
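To make the "eigenvalues" used in these neighborhood definitions concrete: for the second-order cone, the Jordan product is x∘s = (xᵀs, x0 s̄ + s0 x̄), the central trajectory is characterized by x∘s = μe with e = (1, 0, …, 0), and there both eigenvalues of x∘s equal μ; neighborhoods of the trajectory restrict how far these eigenvalues may drift apart. The small check below is our own illustration, not the paper's algorithm:

```python
import numpy as np

def jordan_product(x, s):
    """Jordan product x o s of the second-order cone algebra."""
    return np.concatenate(([x @ s], x[0] * s[1:] + s[0] * x[1:]))

def soc_eigenvalues(z):
    """Eigenvalues z0 +/- ||zbar|| of z w.r.t. the second-order cone."""
    r = np.linalg.norm(z[1:])
    return z[0] + r, z[0] - r

def jordan_inverse(x):
    """Inverse element x^{-1} = (x0, -xbar) / (x0^2 - ||xbar||^2)."""
    det = x[0] ** 2 - np.linalg.norm(x[1:]) ** 2
    return np.concatenate(([x[0]], -x[1:])) / det

mu = 0.5
x = np.array([3.0, 1.0, 2.0])       # interior point: 3 > ||(1, 2)||
s = mu * jordan_inverse(x)          # then x o s = mu * e, i.e. a central-path point
w = jordan_product(x, s)
print(np.round(w, 12))              # approx (mu, 0, 0)
print(soc_eigenvalues(w))           # both eigenvalues equal mu
print(x @ s)                        # duality-gap contribution x's = mu
```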

Interior-point methods for large-scale cone programming

In the conic formulation of a convex optimization problem the constraints are expressed as linear inequalities with respect to a possibly non-polyhedral convex cone. This makes it possible to formulate elegant extensions of interior-point methods for linear programming to general nonlinear convex optimization. Recent research on cone programming algorithms has particularly focused on three convex cones, for which symmetric primal-dual methods have been developed: the nonnegative orthant, the second-order cone, and the positive semidefinite matrix cone. Although not all convex constraints can be expressed in terms of the three standard cones, cone programs associated with these cones are sufficiently general to serve as the basis of convex modeling packages. They are also widely used in machine learning. The main difficulty in the implementation of interior-point methods for cone programming is the complexity of the linear equations that need to be solved at each iteration. These equations are usually dense, unlike the equations that arise in linear programming, and it is therefore difficult to develop general-purpose strategies for exploiting problem structure based solely on sparse matrix methods. In this chapter we give an overview of ad hoc techniques that can be used to exploit non-sparse structure in specific classes of applications. We illustrate the methods with examples from machine learning and present numerical results with CVXOPT, a software package for convex optimization.
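As a minimal, hedged illustration of CVXOPT's cone-LP interface mentioned above (this tiny instance is ours, not one of the chapter's examples): minimize x1 + x2 subject to ||(x1, x2)||₂ ≤ 1, written so that h − Gx must lie in a second-order cone of dimension 3.

```python
from cvxopt import matrix, solvers

# minimize  x1 + x2   subject to  ||(x1, x2)||_2 <= 1.
# Cone-LP form: h - G x in Q^3, i.e. (1, x1, x2) in the second-order cone.
c = matrix([1.0, 1.0])
G = matrix([[0.0, -1.0, 0.0],       # column multiplying x1
            [0.0, 0.0, -1.0]])      # column multiplying x2
h = matrix([1.0, 0.0, 0.0])
dims = {'l': 0, 'q': [3], 's': []}

sol = solvers.conelp(c, G, h, dims)
print(sol['status'])                # 'optimal'
print(sol['x'])                     # approx (-1/sqrt(2), -1/sqrt(2)), objective -sqrt(2)
```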

The Q Method for Symmetric Cone Programming

Journal of Optimization Theory and Applications, 2011

We extend the Q method to symmetric cone programming. An infeasible interior-point algorithm and a Newton-type algorithm are given. We give convergence results for the interior-point algorithm and prove that the Newton-type algorithm is good for "warm starting".

Full Nesterov-Todd Step Primal-Dual Interior-Point Methods for Second-Order Cone Optimization

After a brief introduction to Jordan algebras, we present a primal-dual interior-point algorithm for second-order conic optimization that uses full Nesterov-Todd steps; no line searches are required. The number of iterations of the algorithm is O(√N log(N/ε)), where N stands for the number of second-order cones in the problem formulation and ε is the desired accuracy. The bound coincides with the currently best iteration bound for second-order conic optimization. We also generalize an infeasible interior-point method for linear optimization [26] to second-order conic optimization. As usual for infeasible interior-point methods, the starting point depends on a positive number ζ. The algorithm either finds an ε-solution in at most O(N log(N/ε)) steps or determines that the primal-dual problem pair has no optimal solution with vanishing duality gap satisfying a condition in terms of ζ.
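Purely to illustrate how the O(√N log(N/ε)) bound scales (the O(·) hides constants, so these numbers are not actual iteration counts of the algorithm), one can tabulate the bound's leading expression for a few problem sizes:

```python
import math

def nt_bound(N, eps):
    """Leading term sqrt(N) * log(N / eps) of the iteration bound (constants omitted)."""
    return math.sqrt(N) * math.log(N / eps)

for N in (10, 100, 1000):
    print(N, round(nt_bound(N, 1e-8), 1))
# Growth in N is driven by sqrt(N); the dependence on the accuracy eps is only logarithmic.
```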

A Different Approach to Cone-Convex Optimization

2013

In classical convex optimization theory, the Karush-Kuhn-Tucker (KKT) optimality conditions are necessary and sufficient for optimality if the objective as well as the constraint functions involved are convex. Recently, Lasserre [1] considered a scalar programming problem and showed that if the convexity of the constraint functions is replaced by the convexity of the feasible set, this crucial feature of convex programming can still be preserved. In this paper, we generalize his results by making them applicable to vector optimization problems (VOP) over cones. We consider the minimization of a cone-convex function over a convex feasible set described by cone constraints that are not necessarily cone-convex. We show that if a Slater-type cone constraint qualification holds, then every weak minimizer of (VOP) is a KKT point and, conversely, every KKT point is a weak minimizer. Further, a Mond-Weir type dual is formulated in the modified situation and various duality results are established.
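For reference, the scalar setting that the paper generalizes is the standard KKT system (notation is ours): for min f(x) subject to g_i(x) ≤ 0 with f convex and the feasible set convex, a point x* is a KKT point if there exist multipliers λ_i with

```latex
\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*) = 0, \qquad
\lambda_i \, g_i(x^*) = 0, \quad \lambda_i \ge 0, \quad g_i(x^*) \le 0,
\quad i = 1, \dots, m ,
```

and Lasserre's observation is that, under a suitable qualification condition, these conditions remain necessary and sufficient even when the g_i themselves are not convex, provided the feasible set they describe is.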

Primal-dual first-order methods with O(1/ε) iteration-complexity for cone programming

Mathematical Programming, 2011

In this paper we consider the general cone programming problem, and propose primal-dual convex (smooth and/or nonsmooth) minimization reformulations for it. We then discuss first-order methods suitable for solving these reformulations, namely, Nesterov's optimal method (Nesterov in Doklady AN SSSR 269:543-547, 1983; Math Program 103:127-152, 2005), Nesterov's smooth approximation scheme (Nesterov in Math Program 103:127-152, 2005), and Nemirovski's prox-method (Nemirovski in SIAM J Opt 15, 2005), and propose a variant of Nesterov's optimal method which has outperformed the latter in our computational experiments. We also derive iteration-complexity bounds for these first-order methods applied to the proposed primal-dual reformulations of the cone programming problem. The performance of these methods is then compared using a set of randomly generated linear programming and semidefinite programming instances. We also compare the approach based on the variant of Nesterov's optimal method with the low-rank method proposed by Burer and Monteiro (Math Program Ser B 95:329-357, 2003) for solving a set of randomly generated SDP instances.
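As one concrete building block, the sketch below (our own minimal illustration, not the paper's primal-dual reformulation) runs a Nesterov-style accelerated gradient iteration on a smooth least-squares toy problem; the schemes discussed in the paper combine this kind of iteration with smoothing of the nonsmooth reformulations:

```python
import numpy as np

def accelerated_gradient(grad, L, x0, iters=200):
    """Nesterov-style accelerated gradient method for smooth convex minimization.

    grad: gradient oracle, L: Lipschitz constant of the gradient, x0: start point.
    Uses the standard t_k momentum sequence (FISTA-type extrapolation).
    """
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                        # gradient step at extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)  # momentum / extrapolation
        x, t = x_next, t_next
    return x

# Toy smooth problem: least squares  min 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
x = accelerated_gradient(lambda z: A.T @ (A @ z - b), L, np.zeros(10))
print(np.linalg.norm(A.T @ (A @ x - b)))      # near-zero gradient norm at the solution
```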

On implementing a primal-dual interior-point method for conic quadratic optimization

Mathematical Programming, 2003

Conic quadratic optimization is the problem of minimizing a linear function subject to the intersection of an affine set and the product of quadratic cones. The problem is a convex optimization problem and has numerous applications in engineering, economics, and other areas of science. Indeed, linear and convex quadratic optimization are special cases.
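The last claim rests on the standard epigraph reduction of a convex quadratic term to a quadratic-cone constraint; with Q = FᵀF (notation is ours, not the paper's):

```latex
x^T Q x \le t
\quad\Longleftrightarrow\quad
\left\| \begin{pmatrix} 2 F x \\ 1 - t \end{pmatrix} \right\|_2 \le 1 + t ,
```

so any convex quadratic objective or constraint can be rewritten with a linear objective and quadratic-cone constraints.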

USING LOQO TO SOLVE SECOND-ORDER CONE PROGRAMMING PROBLEMS

1998

Many nonlinear optimization problems can be cast as second-order cone programming problems. In this paper, we discuss a broad spectrum of such applications. For each application, we consider various formulations, some convex, some not, and study which ones are amenable to solution using the general-purpose interior-point solver LOQO. We also compare with other commonly available nonlinear programming solvers and with special-purpose codes for second-order cone programming.
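One representative reformulation of the kind surveyed here (this particular instance is illustrative, not taken from the paper) is sum-of-norms minimization, which becomes an SOCP after introducing epigraph variables:

```latex
\min_x \ \sum_{i=1}^{m} \| A_i x + b_i \|_2
\quad\Longleftrightarrow\quad
\min_{x,\,t} \ \sum_{i=1}^{m} t_i
\quad \text{s.t.} \quad \| A_i x + b_i \|_2 \le t_i, \quad i = 1, \dots, m .
```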