lsq_linear — SciPy v1.15.3 Manual

scipy.optimize.lsq_linear(A, b, bounds=(-inf, inf), method='trf', tol=1e-10, lsq_solver=None, lsmr_tol=None, max_iter=None, verbose=0, *, lsmr_maxiter=None)[source]#

Solve a linear least-squares problem with bounds on the variables.

Given an m-by-n design matrix A and a target vector b with m elements, lsq_linear solves the following optimization problem:

minimize 0.5 * ||A x - b||**2 subject to lb <= x <= ub

This optimization problem is convex, hence a found minimum (if iterations have converged) is guaranteed to be global.
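As a minimal sketch of the call (arbitrary illustrative data):

>>> import numpy as np
>>> from scipy.optimize import lsq_linear
>>> A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
>>> b = np.array([3.0, 3.0, 1.0])
>>> res = lsq_linear(A, b, bounds=(0, 1))  # scalar bounds apply to every variable
>>> res.x  # unconstrained optimum is [4/3, 4/3]; the box pulls it back
array([1., 1.])  # may vary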

Parameters:

A : array_like, sparse matrix or LinearOperator, shape (m, n)

Design matrix. Can be scipy.sparse.linalg.LinearOperator.

b : array_like, shape (m,)

Target vector.

bounds : 2-tuple of array_like or Bounds, optional

Lower and upper bounds on parameters. Defaults to no bounds. There are two ways to specify the bounds:

- Instance of Bounds class.
- 2-tuple of array_like: each element of the tuple must be either an array with the length equal to the number of parameters, or a scalar (in which case the bound is taken to be the same for all parameters). Use np.inf with an appropriate sign to disable bounds on all or some parameters.
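Both forms can be used interchangeably; a small sketch with arbitrary data:

>>> import numpy as np
>>> from scipy.optimize import lsq_linear, Bounds
>>> A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
>>> b = np.array([1.0, 2.0, 3.0])
>>> res_tuple = lsq_linear(A, b, bounds=(0, np.inf))        # scalars broadcast
>>> res_class = lsq_linear(A, b, bounds=Bounds(0, np.inf))  # equivalent Bounds instance
>>> np.allclose(res_tuple.x, res_class.x)
True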

method : 'trf' or 'bvls', optional

Method to perform minimization.

- 'trf' : Trust Region Reflective algorithm adapted for a linear least-squares problem (see Notes).
- 'bvls' : Bounded-Variable Least-Squares algorithm (see Notes). This is an active set method, which requires a number of iterations comparable to the number of variables. Can't be used when A is sparse or a LinearOperator.

Default is 'trf'.

tol : float, optional

Tolerance parameter. The algorithm terminates if a relative change of the cost function is less than tol on the last iteration. Additionally, the first-order optimality measure is considered:

- method='trf' terminates if the uniform norm of the gradient, scaled to account for the presence of the bounds, is less than tol.
- method='bvls' terminates if Karush-Kuhn-Tucker conditions are satisfied within tol tolerance.

lsq_solver : {None, 'exact', 'lsmr'}, optional

Method of solving unbounded least-squares problems throughout iterations:

- 'exact' : Use dense QR or SVD decomposition approach. Can't be used when A is sparse or a LinearOperator.
- 'lsmr' : Use scipy.sparse.linalg.lsmr iterative procedure which requires only matrix-vector product evaluations. Can't be used with method='bvls'.

If None (default), the solver is chosen based on the type of A.

lsmr_tol : None, float or 'auto', optional

Tolerance parameters 'atol' and 'btol' for scipy.sparse.linalg.lsmr. If None (default), it is set to 1e-2 * tol. If 'auto', the tolerance will be adjusted based on the optimality of the current iterate, which can speed up the optimization process, but is not always reliable.

max_iter : None or int, optional

Maximum number of iterations before termination. If None (default), it is set to 100 for method='trf' or to the number of variables for method='bvls' (not counting iterations for 'bvls' initialization).

verbose : {0, 1, 2}, optional

Level of algorithm's verbosity:

- 0 : work silently (default).
- 1 : display a termination report.
- 2 : display progress during iterations.

lsmr_maxiter : None or int, optional

Maximum number of iterations for the lsmr least squares solver, if it is used (by setting lsq_solver='lsmr'). If None (default), it uses lsmr's default of min(m, n) where m and n are the number of rows and columns of A, respectively. Has no effect if lsq_solver='exact'.

Returns:

OptimizeResult with the following fields defined:

x : ndarray, shape (n,)

Solution found.

cost : float

Value of the cost function at the solution.

fun : ndarray, shape (m,)

Vector of residuals at the solution.

optimality : float

First-order optimality measure. The exact meaning depends on method; refer to the description of the tol parameter.

active_mask : ndarray of int, shape (n,)

Each component shows whether a corresponding constraint is active (that is, whether a variable is at the bound):

- 0 : a constraint is not active.
- -1 : a lower bound is active.
- 1 : an upper bound is active.

Might be somewhat arbitrary for the trf method as it generates a sequence of strictly feasible iterates and active_mask is determined within a tolerance threshold.
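A sketch of reading the mask on a toy problem where both kinds of bound become active (arbitrary data; as noted above, with 'trf' the reported values depend on a tolerance threshold):

>>> import numpy as np
>>> from scipy.optimize import lsq_linear
>>> A = np.eye(2)
>>> b = np.array([2.0, -2.0])  # unconstrained optimum [2, -2] lies outside the box
>>> res = lsq_linear(A, b, bounds=(-1, 1))
>>> res.active_mask  # upper bound active for x[0], lower bound for x[1]
array([ 1, -1])  # may vary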

unbounded_sol : tuple

Unbounded least squares solution tuple returned by the least squares solver (set with lsq_solver option). If lsq_solver is not set or is set to 'exact', the tuple contains an ndarray of shape (n,) with the unbounded solution, an ndarray with the sum of squared residuals, an int with the rank of A, and an ndarray with the singular values of A (see NumPy's linalg.lstsq for more information). If lsq_solver is set to 'lsmr', the tuple contains an ndarray of shape (n,) with the unbounded solution, an int with the exit code, an int with the number of iterations, and five floats with various norms and the condition number of A (see SciPy's sparse.linalg.lsmr for more information). This output can be useful for determining the convergence of the least squares solver, particularly the iterative 'lsmr' solver. The unbounded least squares problem is to minimize 0.5 * ||A x - b||**2.
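For instance, unpacking the tuple for the dense 'exact' solver (arbitrary data; the four fields mirror numpy.linalg.lstsq as described above):

>>> import numpy as np
>>> from scipy.optimize import lsq_linear
>>> A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
>>> b = np.array([1.0, 2.0, 2.0])
>>> res = lsq_linear(A, b, bounds=(0, 10), lsq_solver='exact')
>>> x_unb, resid_sq, rank, sing_vals = res.unbounded_sol
>>> int(rank)  # A has full column rank
2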

nit : int

Number of iterations. Zero if the unconstrained solution is optimal.

status : int

Reason for algorithm termination:

- -1 : the algorithm was not able to make progress on the last iteration.
- 0 : the maximum number of iterations is exceeded.
- 1 : the first-order optimality measure is less than tol.
- 2 : the relative change of the cost function is less than tol.
- 3 : the unconstrained solution is optimal.

message : str

Verbal description of the termination reason.

success : bool

True if one of the convergence criteria is satisfied (status > 0).
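A sketch of inspecting these fields together (arbitrary data; here the unconstrained optimum already satisfies the bounds, so the solver reports status 3 and zero iterations):

>>> import numpy as np
>>> from scipy.optimize import lsq_linear
>>> A = np.eye(3)
>>> b = np.array([0.1, 0.2, 0.3])
>>> res = lsq_linear(A, b, bounds=(0, 1))
>>> res.success, res.status, res.nit
(True, 3, 0)  # may vary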

See also

nnls

Linear least squares with non-negativity constraint.

least_squares

Nonlinear least squares with bounds on the variables.

Notes

The algorithm first computes the unconstrained least-squares solution by numpy.linalg.lstsq or scipy.sparse.linalg.lsmr depending on lsq_solver. This solution is returned as optimal if it lies within the bounds.

Method 'trf' runs the adaptation of the algorithm described in [STIR] for a linear least-squares problem. The iterations are essentially the same as in the nonlinear least-squares algorithm, but as the quadratic function model is always accurate, we don't need to track or modify the radius of a trust region. The line search (backtracking) is used as a safety net when a selected step does not decrease the cost function. A more detailed description of the algorithm can be found in scipy.optimize.least_squares.

Method 'bvls' runs a Python implementation of the algorithm described in [BVLS]. The algorithm maintains active and free sets of variables, on each iteration chooses a new variable to move from the active set to the free set, and then solves the unconstrained least-squares problem on the free variables. This algorithm is guaranteed to give an accurate solution eventually, but may require up to n iterations for a problem with n variables. Additionally, an ad-hoc initialization procedure is implemented that determines which variables to set free or active initially. It takes some number of iterations before the actual BVLS starts, but can significantly reduce the number of further iterations.
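A sketch of selecting 'bvls' explicitly; note that it cannot be used with a sparse A or a LinearOperator (random illustrative data):

>>> import numpy as np
>>> from scipy.optimize import lsq_linear
>>> rng = np.random.default_rng(0)
>>> A = rng.standard_normal((20, 5))
>>> b = rng.standard_normal(20)
>>> res = lsq_linear(A, b, bounds=(-0.1, 0.1), method='bvls')
>>> bool(np.all((res.x >= -0.1) & (res.x <= 0.1)))  # solution stays feasible
True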

References

[STIR]

M. A. Branch, T. F. Coleman, and Y. Li, “A Subspace, Interior, and Conjugate Gradient Method for Large-Scale Bound-Constrained Minimization Problems,” SIAM Journal on Scientific Computing, Vol. 21, Number 1, pp 1-23, 1999.

[BVLS]

P. B. Stark and R. L. Parker, "Bounded-Variable Least-Squares: an Algorithm and Applications", Computational Statistics, 10, 129-141, 1995.

Examples

In this example, a problem with a large sparse matrix and bounds on the variables is solved.

>>> import numpy as np
>>> from scipy.sparse import rand
>>> from scipy.optimize import lsq_linear
>>> rng = np.random.default_rng()

>>> m = 2000
>>> n = 1000

>>> A = rand(m, n, density=1e-4, random_state=rng)
>>> b = rng.standard_normal(m)

>>> lb = rng.standard_normal(n)
>>> ub = lb + 1

>>> res = lsq_linear(A, b, bounds=(lb, ub), lsmr_tol='auto', verbose=1)
The relative change of the cost function is less than tol.
Number of iterations 10, initial cost 1.0070e+03, final cost 9.6602e+02, first-order optimality 2.21e-09.  # may vary