linprog(method=’revised simplex’) — SciPy v1.15.2 Manual
scipy.optimize.linprog(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None, bounds=(0, None), method='highs', callback=None, options=None, x0=None, integrality=None)
Linear programming: minimize a linear objective function subject to linear equality and inequality constraints using the revised simplex method.
Deprecated since version 1.9.0: method=’revised simplex’ will be removed in SciPy 1.11.0. It is replaced by method=’highs’ because the latter is faster and more robust.
Linear programming solves problems of the following form:
\[\begin{split}\min_x \ & c^T x \\ \mbox{such that} \ & A_{ub} x \leq b_{ub},\\ & A_{eq} x = b_{eq},\\ & l \leq x \leq u ,\end{split}\]
where \(x\) is a vector of decision variables; \(c\), \(b_{ub}\), \(b_{eq}\), \(l\), and \(u\) are vectors; and \(A_{ub}\) and \(A_{eq}\) are matrices.
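For example, a concrete two-variable instance of this form (hypothetical data, chosen only for illustration) is

\[\begin{split}\min_x \ & -x_1 - 2 x_2 \\ \mbox{such that} \ & x_1 + x_2 \leq 4,\\ & x_1 \leq 3,\\ & 0 \leq x_1, \ 0 \leq x_2 ,\end{split}\]

whose optimum is \(x = (0, 4)\) with objective value \(-8\); the same instance appears in array form in the sketch further below.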
Alternatively, the general problem can be expressed in terms of arrays as:

minimize:

    c @ x

such that:

    A_ub @ x <= b_ub
    A_eq @ x == b_eq
    lb <= x <= ub

Note that by default lb = 0 and ub = None unless specified with bounds.
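As a minimal sketch, the two-variable instance above can be passed directly in this array form (method='revised simplex' is usable only in SciPy versions before 1.11, where it was removed; with a newer SciPy, substitute method='highs'):

    import numpy as np
    from scipy.optimize import linprog

    # minimize -x1 - 2*x2  subject to  x1 + x2 <= 4,  x1 <= 3,  x >= 0 (default bounds)
    c = np.array([-1.0, -2.0])
    A_ub = np.array([[1.0, 1.0],
                     [1.0, 0.0]])
    b_ub = np.array([4.0, 3.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method='revised simplex')
    print(res.x, res.fun)   # expected: [0. 4.] -8.0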
Parameters:
c : 1-D array
The coefficients of the linear objective function to be minimized.
A_ub : 2-D array, optional
The inequality constraint matrix. Each row of A_ub specifies the coefficients of a linear inequality constraint on x.
b_ub : 1-D array, optional
The inequality constraint vector. Each element represents an upper bound on the corresponding value of A_ub @ x.
A_eq : 2-D array, optional
The equality constraint matrix. Each row of A_eq specifies the coefficients of a linear equality constraint on x.
b_eq : 1-D array, optional
The equality constraint vector. Each element of A_eq @ x must equal the corresponding element of b_eq.
bounds : sequence, optional
A sequence of (min, max) pairs for each element in x, defining the minimum and maximum values of that decision variable. Use None to indicate that there is no bound. By default, bounds are (0, None) (all decision variables are non-negative). If a single tuple (min, max) is provided, then min and max will serve as bounds for all decision variables.
method : str
This is the method-specific documentation for ‘revised simplex’. ‘highs’ (default), ‘highs-ds’, ‘highs-ipm’, ‘interior-point’, and ‘simplex’ (legacy) are also available.
callback : callable, optional
Callback function to be executed once per iteration (a usage sketch follows this parameter list).
x0 : 1-D array, optional
Guess values of the decision variables, which will be refined by the optimization algorithm. This argument is currently used only by the ‘revised simplex’ method, and can only be used if x0 represents a basic feasible solution.
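As a sketch of how bounds and callback fit into a call (the data are arbitrary; the callback receives the intermediate OptimizeResult once per iteration, and the attributes read from it here are those documented under Returns below):

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([1.0, 1.0])
    A_eq = np.array([[1.0, 2.0]])
    b_eq = np.array([4.0])

    # One (min, max) pair per variable; None means that side is unbounded.
    bounds = [(0, None), (0, 3)]

    def log_iteration(res):
        # Called once per iteration with the current OptimizeResult.
        print(f"iteration {res.nit}: objective {res.fun}")

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds,
                  method='revised simplex', callback=log_iteration)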
Returns:
res : OptimizeResult
A scipy.optimize.OptimizeResult consisting of the fields below (a usage sketch follows this list):
x : 1-D array
The values of the decision variables that minimize the objective function while satisfying the constraints.
fun : float
The optimal value of the objective function c @ x.
slack : 1-D array
The (nominally positive) values of the slack variables, b_ub - A_ub @ x.
con : 1-D array
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x.
success : bool
True when the algorithm succeeds in finding an optimal solution.
status : int
An integer representing the exit status of the algorithm.

0 : Optimization terminated successfully.
1 : Iteration limit reached.
2 : Problem appears to be infeasible.
3 : Problem appears to be unbounded.
4 : Numerical difficulties encountered.
5 : Problem has no constraints; turn presolve on.
6 : Invalid guess provided.
message : str
A string descriptor of the exit status of the algorithm.
nit : int
The total number of iterations performed in all phases.
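A short sketch of inspecting the returned OptimizeResult (the field names are those listed above; the problem data are arbitrary):

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([2.0, 3.0])
    A_ub = np.array([[-1.0, -1.0]])   # -x1 - x2 <= -1, i.e. x1 + x2 >= 1
    b_ub = np.array([-1.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method='revised simplex')
    if res.success:                    # equivalent to res.status == 0
        print('x     =', res.x)        # optimal decision variables
        print('fun   =', res.fun)      # optimal objective value c @ x
        print('slack =', res.slack)    # b_ub - A_ub @ x, nominally >= 0
        print('nit   =', res.nit)      # total iterations over both phases
    else:
        print(res.status, res.message)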
Options:
maxiter : int (default: 5000)
The maximum number of iterations to perform in either phase.
disp : bool (default: False)
Set to True if indicators of optimization status are to be printed to the console each iteration.
presolve : bool (default: True)
Presolve attempts to identify trivial infeasibilities, identify trivial unboundedness, and simplify the problem before sending it to the main solver. It is generally recommended to keep the default setting True; set to False if presolve is to be disabled.
tol : float (default: 1e-12)
The tolerance which determines when a solution is “close enough” to zero in Phase 1 to be considered a basic feasible solution or close enough to positive to serve as an optimal solution.
autoscale : bool (default: False)
Set to True to automatically perform equilibration. Consider using this option if the numerical values in the constraints are separated by several orders of magnitude.
rr : bool (default: True)
Set to False to disable automatic redundancy removal.
maxupdate : int (default: 10)
The maximum number of updates performed on the LU factorization. Once this many updates have been performed, the basis matrix is factorized from scratch.
mast : bool (default: False)
Minimize Amortized Solve Time. If enabled, the average time to solve a linear system using the basis factorization is measured. Typically, the average solve time will decrease with each successive solve after initial factorization, as factorization takes much more time than the solve operation (and updates). Eventually, however, the updated factorization becomes sufficiently complex that the average solve time begins to increase. When this is detected, the basis is refactorized from scratch. Enable this option to maximize speed at the risk of nondeterministic behavior. Ignored if maxupdate is 0.
pivot : “mrc” or “bland” (default: “mrc”)
Pivot rule: Minimum Reduced Cost (“mrc”) or Bland’s rule (“bland”). Choose Bland’s rule if the iteration limit is reached and cycling is suspected.
unknown_options : dict
Optional arguments not used by this particular solver. If unknown_options is non-empty, a warning is issued listing all unused options.
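The options above are passed as a dictionary through the options argument; a minimal sketch (the particular values are arbitrary and only illustrate the syntax):

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([1.0, -1.0])
    A_ub = np.array([[1.0, 1.0]])
    b_ub = np.array([2.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method='revised simplex',
                  options={'maxiter': 10000,   # raise the iteration limit
                           'tol': 1e-9,        # relax the feasibility/optimality tolerance
                           'pivot': 'bland',   # Bland's rule, e.g. if cycling is suspected
                           'maxupdate': 20})   # allow more LU updates before refactorizing
    print(res.status, res.nit)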
Notes
Method revised simplex uses the revised simplex method as described in [9], except that a factorization [11] of the basis matrix, rather than its inverse, is efficiently maintained and used to solve the linear systems at each iteration of the algorithm.
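To illustrate the idea in the note above (this is not the solver’s internal code): at each iteration the basic solution is obtained by solving B x_B = b with a stored factorization of the basis matrix B, rather than by forming B^{-1} explicitly. A minimal sketch with scipy.linalg, using an arbitrary 2x2 basis:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    # Hypothetical basis matrix B (columns of the constraint matrix for the
    # current basic variables) and right-hand side b; values are arbitrary.
    B = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([4.0, 5.0])

    lu, piv = lu_factor(B)        # factorize once ...
    x_B = lu_solve((lu, piv), b)  # ... and reuse the factors to solve B @ x_B = b
    print(x_B)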
References
[9]
Bertsimas, Dimitris, and J. Tsitsiklis. “Introduction to linear programming.” Athena Scientific 1 (1997): 997.
[11]
Bartels, Richard H. “A stabilization of the simplex method.” Numerische Mathematik 16.5 (1971): 414-434.