Mathematical programming via the least-squares method

Exact and stable least squares solution to the linear programming problem

Central European Journal of Mathematics, 2005

A linear programming problem is transformed into the problem of finding the element of a polyhedron with minimal norm. According to A. Cline [6], this problem is equivalent to the least squares problem on the positive orthant. An orthogonal method for solving the problem is used. This method was presented earlier by the author and is based on the highly developed least squares technique. The method is intended first of all for solving unstable and degenerate problems. A new version of the artificial basis method (M-method) is presented. The solving of systems of linear inequalities is also considered.
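The least squares problem on the positive orthant that underlies this reduction is ordinary non-negative least squares (NNLS). As a toy illustration of that subproblem only — not of the author's orthogonal method — the sketch below solves a small NNLS instance with `scipy.optimize.nnls`:

```python
import numpy as np
from scipy.optimize import nnls

# Least squares on the positive orthant:
#   minimize ||A x - b||  subject to  x >= 0
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.array([1.0, -1.0])

x, rnorm = nnls(A, b)
# The negative component of b cannot be matched with x >= 0,
# so x = [1, 0] and the residual norm is 1.
```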

Least Squares Solutions of Linear Inequality Systems

We discuss the problem of finding an approximate solution to an overdetermined system of linear inequalities, or an exact solution if the system is consistent. Theory and R code are provided for four methods: two use active set methods for non-negatively constrained least squares, one uses alternating least squares, and one uses a nonsmooth Newton method.
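One elementary way to frame the approximate solution of an inequality system is to minimize the sum of squared constraint violations. The sketch below (in Python rather than the paper's R, and far simpler than the four methods it describes) applies plain gradient descent to that objective, assuming a small consistent system:

```python
import numpy as np

# Approximate solution of A x <= b by minimizing the sum of
# squared violations  f(x) = 0.5 * ||max(A x - b, 0)||^2.
A = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0, -1.0]])
b = np.array([1.0, 1.0, 1.0])

x = np.array([3.0, 3.0])          # infeasible starting point
for _ in range(200):
    violation = np.maximum(A @ x - b, 0.0)
    x -= 0.1 * A.T @ violation    # gradient of f is A^T * violation

# This system is consistent, so the iteration drives the violations
# toward zero and x toward the feasible region.
```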

Least squares optimization

The following is a brief review of least squares optimization and constrained optimization techniques. I assume the reader is familiar with basic linear algebra, including the singular value decomposition (as reviewed in my handout Geometric Review of Linear Algebra). Least squares (LS) problems are those in which the objective function may be expressed as a sum of squares. Such problems have a natural relationship to distances in Euclidean geometry, and their solutions may be computed analytically using the tools of linear algebra.
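As a concrete instance of the analytic solution, the minimum-norm least squares solution can be read off the SVD as x = V S⁺ Uᵀ b. The sketch below computes it that way and checks it against NumPy's built-in solver (a generic illustration, not taken from the handout):

```python
import numpy as np

# Solve  minimize ||A x - b||  analytically via the SVD: x = V S^+ U^T b.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))   # overdetermined, full column rank
b = rng.standard_normal(6)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)    # apply the pseudoinverse of S

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
```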

A numerically stable least squares solution to the quadratic programming problem

2008

The strictly convex quadratic programming problem is transformed to the least distance problem — finding the solution of minimum norm to a system of linear inequalities. This problem is equivalent to the linear least squares problem on the positive orthant. It is solved using orthogonal transformations, which are memorized as products. As in the revised simplex method, an auxiliary matrix is used for the computations. Compared to modified-simplex-type methods, the presented dual algorithm QPLS requires less storage and solves ill-conditioned problems more precisely. The algorithm is illustrated on some difficult problems.
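The least distance problem named here can be reduced to NNLS in the classical way given by Lawson and Hanson: stack the constraint rows with the right-hand side, solve one NNLS problem, and recover x from its residual. The sketch below is that textbook reduction — not the QPLS algorithm itself — using `scipy.optimize.nnls` for the orthant-constrained subproblem:

```python
import numpy as np
from scipy.optimize import nnls

def ldp(G, h):
    """Least distance problem: minimize ||x|| subject to G x >= h,
    via the Lawson-Hanson reduction to non-negative least squares."""
    n = G.shape[1]
    E = np.vstack([G.T, h.reshape(1, -1)])   # (n+1) x m
    f = np.zeros(n + 1)
    f[-1] = 1.0
    u, _ = nnls(E, f)
    r = E @ u - f                            # NNLS residual
    if abs(r[-1]) < 1e-12:
        raise ValueError("constraints are infeasible")
    return -r[:-1] / r[-1]

# minimize ||x|| subject to x1 >= 1 and x2 >= 1  ->  x = (1, 1)
x = ldp(np.eye(2), np.array([1.0, 1.0]))
```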

On stable least squares solution to the system of linear inequalities

Central European Journal of Mathematics, 2007

The system of inequalities is transformed into the least squares problem on the positive orthant. This problem is solved using orthogonal transformations, which are memorized as products. The author's previous paper presented a method in which all the coefficients of the system were transformed at each step. This paper describes a method applicable also to large matrices. As in the revised simplex method, an auxiliary matrix is used for the computations. The algorithm is primarily intended for unstable and degenerate problems.

Solving the minimal least squares problem subject to bounds on the variables

BIT, 1984

A computational procedure is developed for determining the solution of minimal length to a linear least squares problem subject to bounds on the variables. In the first stage, a solution to the least squares problem is computed and then in the second stage, the solution of minimal length is determined. The objective function in each step is minimized by an active set method adapted to the special structure of the problem. The systems of linear equations satisfied by the descent direction and the Lagrange multipliers in the minimization algorithm are solved by direct methods based on QR decompositions or iterative preconditioned conjugate gradient methods. The direct and the iterative methods are compared in numerical experiments, where the solutions are sought to a sequence of related, minimal least squares problems subject to bounds on the variables. The application of the iterative methods to large, sparse problems is discussed briefly.
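The first-stage subproblem — least squares subject to bounds on the variables — is available off the shelf in SciPy. The sketch below only shows that subproblem being solved (it does not reproduce the paper's two-stage minimal-length procedure or its active set method):

```python
import numpy as np
from scipy.optimize import lsq_linear

# minimize ||A x - b||  subject to  0 <= x <= 1
A = np.eye(2)
b = np.array([2.0, -3.0])

res = lsq_linear(A, b, bounds=(0.0, 1.0))
# The unconstrained solution (2, -3) is clipped to the box: x = (1, 0).
```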

A linear programming-based optimization algorithm for solving nonlinear programming problems

European Journal of Operational Research, 2010

In this paper a linear programming-based optimization algorithm called the Sequential Cutting Plane algorithm is presented. The main features of the algorithm are described, convergence to a Karush-Kuhn-Tucker stationary point is proved, and numerical experience on some well-known test sets is reported. The algorithm is based on an earlier version for convex inequality constrained problems, but here it is extended to general continuously differentiable nonlinear programming problems containing both nonlinear inequality and equality constraints. A comparison with some existing solvers shows that the algorithm is competitive with them. Thus, this new method based on solving linear programming subproblems is a good alternative for solving nonlinear programming problems efficiently. The algorithm has been used as a subsolver in a mixed integer nonlinear programming algorithm, where, for convex, inequality constrained problems, the linear subproblems conveniently provide lower bounds on the optimal solutions of the convex NLP subproblems in the branch and bound tree. Lower bounds are needed in the branch and bound procedure for efficiency reasons. See [21] for more details. Very promising results are reported in [20] for a special set of difficult block optimization problems. The MINLP version of the algorithm found better solutions in one minute than other commercial solvers found in 12 hours.
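The Sequential Cutting Plane algorithm itself is not reproduced in this abstract. As a much simpler illustration of the general idea — LP subproblems driving a nonlinear optimization — the sketch below runs the classical Kelley cutting-plane scheme on a one-dimensional convex problem, solving each LP master problem with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Kelley's cutting-plane method for  minimize f(x) = x^2  on [-1, 2]:
# repeatedly minimize an LP lower model built from tangent cuts
#   t >= f(xi) + f'(xi) * (x - xi).
f  = lambda x: x * x
df = lambda x: 2.0 * x

cuts = [-1.0, 2.0]                      # cuts at both endpoints keep the LP bounded
x = 2.0
for _ in range(30):
    # variables (x, t); each cut becomes  g*x - t <= g*xi - f(xi)
    A_ub = [[df(xi), -1.0] for xi in cuts]
    b_ub = [df(xi) * xi - f(xi) for xi in cuts]
    res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-1.0, 2.0), (None, None)])
    x, t = res.x
    if f(x) - t < 1e-9:                 # model gap closed: done
        break
    cuts.append(x)

# x converges to the minimizer 0.
```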

Sequential Coordinate-Wise Algorithm for the Non-negative Least Squares Problem

Lecture Notes in Computer Science, 2005

This report contributes to the solution of the non-negative least squares (NLS) problem. The NLS problem is a substantial part of the learning procedure of associative networks. First, stopping conditions suitable for iterative numerical algorithms solving the NLS problem are derived. The conditions make it possible to control the quality of the found solution in terms of the optimized objective function. Second, a novel sequential coordinate-wise algorithm is proposed. The algorithm is easy to implement and showed promising performance in the synthetic experiments conducted.
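A coordinate-wise update for NNLS, in the spirit of the proposed algorithm, cycles over the coordinates and sets each one to its exact one-dimensional minimizer clipped at zero. The sketch below is a minimal version of this idea, not the paper's exact procedure or its stopping conditions:

```python
import numpy as np

def nnls_coordinate(A, b, iters=100):
    """Cyclic coordinate descent for  minimize 0.5*||A x - b||^2, x >= 0.
    Each coordinate is set to its exact nonnegative 1-D minimizer."""
    H = A.T @ A                    # Hessian of the quadratic objective
    c = -A.T @ b                   # linear term
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for k in range(len(x)):
            grad_k = H[k] @ x + c[k]
            x[k] = max(0.0, x[k] - grad_k / H[k, k])
    return x

x = nnls_coordinate(np.array([[1.0, 0.0], [0.0, 1.0]]),
                    np.array([1.0, -1.0]))
# -> x = [1, 0]
```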

A least-squares primal-dual algorithm for solving linear programming problems

Operations Research Letters, 2002

We have developed a least-squares primal-dual algorithm for solving linear programming problems that is impervious to degeneracy, with strict improvement attained at each iteration. We tested our algorithm on a range of linear programming problems including ones that require many pivots, usually because of degeneracy, using the primal simplex method and the dual simplex method with steepest edge pivoting. On average, our algorithm took less than half the number of iterations required by the primal and dual simplex methods; on some problems, it took over 30 times fewer iterations.