nnls — SciPy v1.15.3 Manual
scipy.optimize.nnls(A, b, maxiter=None, *, atol=None)
Solve argmin_x || Ax - b ||_2 for x >= 0.
This problem, often called Nonnegative Least Squares (NNLS), is a convex optimization problem with convex constraints. It typically arises when x models quantities for which only nonnegative values are attainable; for example, weights of ingredients, component costs, and so on.
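For intuition, here is a small sketch contrasting the unconstrained least-squares solution with the nonnegative one; the data are made up for illustration, chosen so that the unconstrained fit has a negative coefficient:

```python
import numpy as np
from scipy.optimize import nnls

# Fit b ~ c0 + c1 * t for t = 1, 2, 3; the data trend downward, so an
# unconstrained fit wants a negative slope.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([3.0, 2.0, 1.0])

# Unconstrained least squares: slope comes out negative.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# NNLS pins the slope at zero and refits the intercept.
x_nn, rnorm = nnls(A, b)

print(x_ls)         # [ 4. -1.]
print(x_nn, rnorm)  # [2. 0.], sqrt(2)
```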
Parameters:
A : (m, n) ndarray
Coefficient array.
b : (m,) ndarray, float
Right-hand side vector.
maxiter : int, optional
Maximum number of iterations. Default value is 3 * n.
atol : float, optional
Tolerance value used in the algorithm to assess closeness to zero in the projected residual (A.T @ (A x - b)) entries. Increasing this value relaxes the solution constraints. A typical relaxation value can be selected as max(m, n) * np.linalg.norm(A, 1) * np.spacing(1.). This value is not set by default since the norm operation becomes expensive for large problems, hence it should be supplied only when necessary.
Returns:
x : ndarray
Solution vector.
rnorm : float
The 2-norm of the residual, || Ax - b ||_2.
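As a quick sanity check on the return values, the following sketch confirms that rnorm is the Euclidean norm of the residual evaluated at the returned solution:

```python
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 1.0, 1.0])
x, rnorm = nnls(A, b)

# rnorm equals || A x - b ||_2 at the solution.
print(x, rnorm, np.linalg.norm(A @ x - b))
```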
See also

lsq_linear
Linear least squares with bounds on the variables
Notes
The code is based on [2] which is an improved version of the classical algorithm of [1]. It utilizes an active set method and solves the KKT (Karush-Kuhn-Tucker) conditions for the non-negative least squares problem.
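To make the active-set idea concrete, here is a minimal Python sketch of the classical Lawson-Hanson iteration described in [1]. It is a teaching approximation, not the implementation SciPy ships; the function name `nnls_sketch` and the tolerance are made up for illustration:

```python
import numpy as np

def nnls_sketch(A, b, maxiter=None, tol=1e-12):
    """Illustrative Lawson-Hanson active-set NNLS (not SciPy's code)."""
    m, n = A.shape
    maxiter = maxiter if maxiter is not None else 3 * n
    x = np.zeros(n)
    P = np.zeros(n, dtype=bool)     # passive (unconstrained) set
    w = A.T @ (b - A @ x)           # projected residual / KKT multipliers
    it = 0
    while (~P).any() and w[~P].max() > tol and it < maxiter:
        it += 1
        # Free the zero variable with the largest positive multiplier.
        j = np.argmax(np.where(~P, w, -np.inf))
        P[j] = True
        while True:
            # Unconstrained least squares restricted to the passive set.
            s = np.zeros(n)
            if P.any():
                s[P] = np.linalg.lstsq(A[:, P], b, rcond=None)[0]
            if not P.any() or s[P].min() > 0:
                x = s
                break
            # Step back to the feasible boundary and drop variables
            # that hit zero from the passive set.
            mask = P & (s <= 0)
            alpha = np.min(x[mask] / (x[mask] - s[mask]))
            x = x + alpha * (s - x)
            P &= x > tol
        w = A.T @ (b - A @ x)
    return x, np.linalg.norm(A @ x - b)

A = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 1.0, 1.0])
x_demo, r_demo = nnls_sketch(A, b)
print(x_demo, r_demo)   # agrees with scipy.optimize.nnls on this input
```

The outer loop grows the passive set while any zero variable has a positive multiplier; the inner loop restores feasibility whenever the restricted least-squares step leaves the nonnegative orthant.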
References

[1] : Lawson C., Hanson R.J., "Solving Least Squares Problems", SIAM, 1995.
[2] : Bro, Rasmus and de Jong, Sijmen, "A Fast Non-Negativity-Constrained Least Squares Algorithm", Journal of Chemometrics, 1997.
Examples
>>> import numpy as np
>>> from scipy.optimize import nnls
>>> A = np.array([[1, 0], [1, 0], [0, 1]])
>>> b = np.array([2, 1, 1])
>>> nnls(A, b)
(array([1.5, 1. ]), 0.7071067811865475)

>>> b = np.array([-1, -1, -1])
>>> nnls(A, b)
(array([0., 0.]), 1.7320508075688772)