A New Method Combining Interior and Exterior Approaches for Linear Programming
Related papers
Station Cone Algorithm for Linear Programming
Journal of Mathematics and System Science, 2016
Recently we proposed a new method combining interior and exterior approaches to solve linear programming problems. The method uses an interior point, which is connected to the vertex of the so-called station cone; this vertex is also a solution of the dual problem. This allows us to determine the entering vector and the new station cone. In this paper, we present a modified algorithm in which a new interior point is determined at each iteration. The newly constructed interior point moves toward the optimal vertex. Thanks to this shortening from both inside and outside, the new version finds the optimal solution more quickly. Computational experiments show that the number of iterations of the modified algorithm is significantly smaller than that of the second phase of the dual simplex method.
On Interior-Point Methods and Simplex Method in Linear Programming
Analele Stiintifice ale Universitatii Ovidius Constanta, Seria Matematica
In this paper we treat numerical computation methods for linear programming. Starting from an analysis of the efficiency and deficiencies of the simplex procedure, we present the new possibilities offered by interior-point methods, which arose from practical necessity, from the need for efficient means of solving large-scale problems. We also describe a Java implementation of Karmarkar's method.
An interior point method for linear programming
The Journal of the Australian Mathematical Society. Series B. Applied Mathematics, 1990
The design of an interior point method for linear programming is discussed, and the results of a simulation study are reported. Emphasis is put on guessing the optimal vertex at as early a stage as possible.
A modified layered-step interior-point algorithm for linear programming
Mathematical Programming, 1998
The layered-step interior-point algorithm was introduced by Vavasis and Ye. It accelerates the path-following interior-point algorithm, and its arithmetic complexity depends only on the coefficient matrix A. The main drawback of the algorithm is its use of a large, generally unknown constant ZA both in computing the search direction and in initializing the algorithm. We propose a modified layered-step interior-point algorithm that does not use this constant in computing the search direction. The constant is required only for initialization when a well-centered feasible solution is not available, and it is not required at all if an upper bound on the norm of a primal-dual optimal solution is known in advance. The complexity of the modified algorithm is the same as that of Vavasis and Ye.
A primal - dual exterior point algorithm for linear programming problems
Yugoslav Journal of Operations Research, 2009
The aim of this paper is to present a new simplex-type algorithm for the linear programming problem. The Primal-Dual method is a simplex-type pivoting algorithm that generates two paths in order to converge to the optimal solution: the first path is primal feasible, while the second is dual feasible for the original problem. Specifically, we use a three-phase implementation. The first two phases construct the required primal and dual feasible solutions using the Primal Simplex algorithm; in the third phase, the Primal-Dual algorithm is applied. Moreover, a computational study has been carried out on randomly generated sparse linear problems to compare the algorithm's computational efficiency with the Primal Simplex algorithm and with MATLAB's Interior Point Method implementation. The algorithm appears very promising, since it clearly shows its superiority to the Primal Simplex algorithm as well as its robustness relative to the IPM implementation.
The Modified Primal-Dual Interior Point Method for Linear Optimization
In this paper, we describe a new primal-dual interior point method for finding search directions for linear optimization. The simplex method can be interpreted as a systematic procedure for approaching an optimal extreme point satisfying the Karush-Kuhn-Tucker (KKT) conditions. At each iteration, primal feasibility is satisfied, and complementary slackness always holds. Dual feasibility, however, is partially violated during the iterations of the simplex method, until the optimal solution is reached. Therefore, we compute the optimal solution by an interior point method in which the KKT conditions are satisfied. The KKT conditions form a nonlinear system of equations, so we reduce this system to a linear matrix system and transform the coefficient matrix into a symmetric positive definite matrix in order to apply the Cholesky factorization. We improve (x, y, s) by computing the search directions in different cases. Finally, the met...
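The step described above, in which the reduced system's coefficient matrix is made symmetric positive definite so that Cholesky factorization applies, can be illustrated with a minimal pure-Python sketch. The 2x2 system below is an invented example, not data from the paper:

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L L^T for a symmetric
    positive definite matrix A (given as a list of lists)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve_spd(A, b):
    """Solve A x = b via Cholesky: forward substitution with L,
    then back substitution with L^T."""
    n = len(A)
    L = cholesky(A)
    # Forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    # Back substitution: L^T x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

A = [[4.0, 2.0], [2.0, 3.0]]
b = [10.0, 9.0]
print(solve_spd(A, b))  # solves the small SPD system, -> [1.5, 2.0]
```

In an interior point iteration, the SPD matrix factored this way is typically far larger and sparse, so production codes use sparse Cholesky routines rather than the dense sketch above.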
Adapting the Interior Point Method for the Solution of Linear Programs on High Performance Computers
Computer Science and Operations Research, 1992
In this paper we describe a unified algorithmic framework for the interior point method (IPM) for solving linear programs (LPs), which allows us to adapt it to a range of high performance computer architectures. We set out the reasons why the IPM makes better use of high performance computer architectures than the sparse simplex method. In the inner iteration of the IPM, a search direction is computed using Newton or higher-order methods; computationally, this involves solving a sparse symmetric positive definite (SSPD) system of equations. The choice between direct and indirect methods for the solution of this system, and the design of data structures that take advantage of coarse-grain parallel and massively parallel computer architectures, are considered in detail. Finally, we present experimental results from solving NETLIB test problems on examples of these architectures and put forward arguments for why integration of the system within sparse simplex is beneficial.

2. Sparse Simplex and Interior Point Method: Hardware Platforms

Progress in the solution of large LPs has been achieved in three ways, namely hardware, software and algorithmic developments. Most of the developments during the 70's and early 80's in the sparse simplex method were based on serial computer architectures. The main thrust of these developments was towards exploiting sparsity and finding methods which either reduced simplex iterations or reduced the computational work in each iteration [BIXBY91, MITAMZ91]. In general, these algorithmic and software developments of the sparse simplex method cannot be readily extended to parallel computers. In contrast, the interior point methods, which have proven to be robust and competitive, appear better positioned to make use of newly emerging high performance architectures. The primal-dual algorithm converges to the optimal solution in at most O(n^{1/2} L) iterations [MONADL89], where n denotes the dimension of the problem and L the input size.
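As an illustration of an "indirect" (iterative) solver for the SSPD system mentioned above, here is a minimal pure-Python conjugate gradient sketch. The example matrix is invented for illustration; a production IPM would use a sparse, preconditioned implementation:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive definite matrix A
    using the conjugate gradient method, starting from x = 0."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                          # residual b - A x (x = 0 initially)
    p = r[:]                          # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:              # residual small enough: converged
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

A = [[4.0, 2.0], [2.0, 3.0]]
b = [10.0, 9.0]
print(conjugate_gradient(A, b))       # -> approximately [1.5, 2.0]
```

Iterative solvers like this only need matrix-vector products, which is one reason indirect methods map well onto the parallel architectures the paper discusses, at the cost of sensitivity to conditioning.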
An improving procedure of the interior projective method for linear programming
Applied Mathematics and Computation, 2008
In this study we propose a practical modification of Karmarkar's projective algorithm for linear programming problems. This modification leads to a considerable reduction in both cost and the number of iterations, a claim confirmed by numerical experiments.
An efficient search direction for linear programming problems
Computers & Operations Research, 2002
In this paper, we present an auxiliary algorithm that helps the simplex method commence from a better initial basic feasible solution, improving the speed of obtaining the optimal solution. The idea of choosing a direction towards an optimal point presented in this paper is new and easily implemented. In our experiments, the algorithm reaches a corner point of the feasible region within a few iterative steps, independent of the starting point. The computational results show that when the auxiliary algorithm is adopted as the phase I process, the simplex method consistently reduces the number of required iterations by about 40%.
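The general idea of marching from an interior point toward the boundary along an improving direction can be sketched with a simple ratio test. The direction, constraints, and starting point below are hypothetical illustrations, not the paper's actual auxiliary algorithm:

```python
def ratio_test_step(A, b, x, d):
    """From an interior feasible point x of {x : A x <= b}, move along
    direction d until the first constraint becomes binding.
    Returns the boundary point reached (or x if d never hits a constraint)."""
    m, n = len(A), len(x)
    step = None
    for i in range(m):
        Ad = sum(A[i][j] * d[j] for j in range(n))
        if Ad > 1e-12:                      # constraint i tightens along d
            slack = b[i] - sum(A[i][j] * x[j] for j in range(n))
            t = slack / Ad                  # step at which row i becomes binding
            if step is None or t < step:
                step = t
    if step is None:
        return x                            # direction is unbounded
    return [x[j] + step * d[j] for j in range(n)]

# Maximize x + y over the unit box; A x <= b encodes
# x <= 1, y <= 1, -x <= 0, -y <= 0.
A = [[1, 0], [0, 1], [-1, 0], [0, -1]]
b = [1, 1, 0, 0]
x0 = [0.25, 0.5]                            # interior starting point
print(ratio_test_step(A, b, x0, [1, 1]))    # -> [0.75, 1.0]
```

Repeating such steps with the direction projected onto the constraints that have become binding would drive the point toward a vertex, which is the kind of "corner point release" behavior the abstract describes.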