Symmetric rank-one

The Symmetric Rank 1 (SR1) method is a quasi-Newton method that updates an approximation to the second derivative (Hessian) based on the derivatives (gradients) calculated at two points. It is a generalization of the secant method to multidimensional problems. The update maintains the symmetry of the matrix but does not guarantee that the updated matrix is positive definite.

In theory, the sequence of Hessian approximations generated by the SR1 method converges to the true Hessian under mild conditions; in preliminary numerical experiments, the approximate Hessians generated by the SR1 method approach the true Hessian faster than those produced by popular alternatives (BFGS or DFP).[1][2] The SR1 method also has computational advantages for sparse or partially separable problems.[3]

A twice continuously differentiable function $x \mapsto f(x)$ has a gradient $\nabla f$ and Hessian matrix $B$. The function $f$ has a Taylor-series expansion at $x_0$, which can be truncated to

$$f(x_0 + \Delta x) \approx f(x_0) + \nabla f(x_0)^T \Delta x + \tfrac{1}{2} \Delta x^T B \Delta x;$$

its gradient likewise has a Taylor-series approximation,

$$\nabla f(x_0 + \Delta x) \approx \nabla f(x_0) + B \Delta x.$$
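
For a quadratic $f(x) = \tfrac{1}{2} x^T A x$ this gradient relation holds exactly with $B = A$, which makes it easy to check numerically. A minimal sketch (the matrix and the points are made up for illustration):

```python
import numpy as np

# Quadratic test function f(x) = 0.5 * x^T A x, whose Hessian is A everywhere.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def grad(x):
    return A @ x  # gradient of the quadratic is A x

x0 = np.array([1.0, -1.0])
dx = np.array([0.1, 0.2])

# For a quadratic, grad(x0 + dx) - grad(x0) = A dx holds exactly.
y = grad(x0 + dx) - grad(x0)
print(np.allclose(y, A @ dx))  # True
```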

This gradient relation, the secant equation, is used to update $B$. It need not have a unique solution $B$. The SR1 formula computes the symmetric solution that is closest to the current approximation $B_k$ in the sense that it differs from $B_k$ only by a symmetric matrix of rank 1:

$$B_{k+1} = B_k + \frac{(y_k - B_k \Delta x_k)(y_k - B_k \Delta x_k)^T}{(y_k - B_k \Delta x_k)^T \Delta x_k},$$

where

$$y_k = \nabla f(x_k + \Delta x_k) - \nabla f(x_k).$$
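
A minimal NumPy sketch of one such update (the function name and interface are illustrative, not taken from any particular library):

```python
import numpy as np

def sr1_update(B, dx, y):
    """One SR1 update of the Hessian approximation B.

    dx -- step Delta x_k
    y  -- gradient difference y_k = grad f(x_k + dx) - grad f(x_k)
    """
    r = y - B @ dx                      # residual of the secant equation
    denom = r @ dx                      # scalar (y_k - B_k dx)^T dx
    return B + np.outer(r, r) / denom   # symmetric rank-1 correction
```

This unguarded version divides by $(y_k - B_k \Delta x_k)^T \Delta x_k$; a safeguarded variant is given below.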

The corresponding update to the approximate inverse Hessian $H_k = B_k^{-1}$ is

$$H_{k+1} = H_k + \frac{(\Delta x_k - H_k y_k)(\Delta x_k - H_k y_k)^T}{(\Delta x_k - H_k y_k)^T y_k}.$$
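
The inverse update has the same structure with the roles of $\Delta x_k$ and $y_k$ exchanged, so the sketch above carries over directly (again illustrative, not a library API):

```python
import numpy as np

def sr1_inverse_update(H, dx, y):
    """One SR1 update of the inverse-Hessian approximation H = B^{-1}."""
    r = dx - H @ y                       # residual dx_k - H_k y_k
    return H + np.outer(r, r) / (r @ y)  # symmetric rank-1 correction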

One might wonder why positive-definiteness is not preserved; after all, a rank-1 update of the form $B_{k+1} = B_k + vv^T$ is positive definite if $B_k$ is. The explanation is that the update may instead have the form $B_{k+1} = B_k - vv^T$, because the denominator can be negative, and in that case there are no guarantees about positive-definiteness.
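
A small hand-picked example makes this concrete: starting from $B_k = I$ with a step along which the curvature is negative, the denominator is negative and the update produces an indefinite matrix.

```python
import numpy as np

B = np.eye(2)
dx = np.array([1.0, 0.0])
y = np.array([-1.0, 0.0])   # negative curvature along dx: y^T dx < 0

r = y - B @ dx              # r = (-2, 0)
B_next = B + np.outer(r, r) / (r @ dx)   # denominator r^T dx = -2

print(np.linalg.eigvalsh(B_next))        # [-1.  1.] -> indefinite
```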

The SR1 formula has been rediscovered a number of times. Since the denominator can vanish, some authors have suggested that the update be applied only if

$$\left| \Delta x_k^T (y_k - B_k \Delta x_k) \right| \geq r \, \|\Delta x_k\| \cdot \|y_k - B_k \Delta x_k\|,$$

where $r \in (0, 1)$ is a small number, e.g. $r = 10^{-8}$.[4]
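
In code, the safeguard wraps the update sketched earlier; when the test fails, the previous approximation is simply kept. A sketch, using $r = 10^{-8}$ as suggested above:

```python
import numpy as np

def sr1_update_safe(B, dx, y, r=1e-8):
    """SR1 update, skipped when the denominator is unreliably small."""
    res = y - B @ dx
    denom = res @ dx
    # Apply the update only if |dx^T (y - B dx)| >= r * ||dx|| * ||y - B dx||.
    if abs(denom) < r * np.linalg.norm(dx) * np.linalg.norm(res):
        return B                        # skip: keep the previous approximation
    return B + np.outer(res, res) / denom
```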

The SR1 update maintains a dense matrix, which can be prohibitive for large problems. Similar to the L-BFGS method, a limited-memory SR1 (L-SR1) algorithm exists.[5] Instead of storing the full Hessian approximation, an L-SR1 method stores only the $m$ most recent pairs $\{(s_i, y_i)\}_{i=k-m}^{k-1}$, where $s_i := \Delta x_i$ and $m$ is an integer much smaller than the problem size ($m \ll n$). The limited-memory matrix is based on the compact matrix representation

$$B_k = B_0 + J_k N_k^{-1} J_k^T, \qquad J_k = Y_k - B_0 S_k, \qquad N_k = D_k + L_k + L_k^T - S_k^T B_0 S_k,$$

$$S_k = \begin{bmatrix} s_{k-m} & s_{k-m+1} & \ldots & s_{k-1} \end{bmatrix}, \qquad Y_k = \begin{bmatrix} y_{k-m} & y_{k-m+1} & \ldots & y_{k-1} \end{bmatrix},$$

where $L_k$ is the strictly lower-triangular part of $S_k^T Y_k$ and $D_k$ is its diagonal:

$$(L_k)_{ij} = \begin{cases} (S_k^T Y_k)_{ij} & \text{if } i > j, \\ 0 & \text{otherwise,} \end{cases} \qquad D_k = \operatorname{diag}\!\left[ s_{k-m}^T y_{k-m}, \ldots, s_{k-1}^T y_{k-1} \right].$$
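
For small problems the compact representation can be checked by forming $B_k$ densely. A sketch, assuming $S$ and $Y$ hold the stored pairs as columns (in practice $B_k$ is never formed explicitly):

```python
import numpy as np

def lsr1_matrix(S, Y, B0):
    """Dense B_k from the compact L-SR1 representation.

    S, Y -- n x m matrices whose columns are the stored pairs s_i, y_i
    B0   -- n x n initial approximation (often a multiple of the identity)
    """
    J = Y - B0 @ S                           # J_k = Y_k - B0 S_k
    SY = S.T @ Y                             # m x m matrix of products s_i^T y_j
    L = np.tril(SY, k=-1)                    # strictly lower-triangular part L_k
    D = np.diag(np.diag(SY))                 # diagonal part D_k
    N = D + L + L.T - S.T @ (B0 @ S)         # N_k
    return B0 + J @ np.linalg.solve(N, J.T)  # B0 + J_k N_k^{-1} J_k^T
```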

Since the update can be indefinite, the L-SR1 algorithm is suitable for a trust-region strategy. Because of the limited-memory matrix, the trust-region L-SR1 algorithm scales linearly with the problem size, just like L-BFGS.

  1. ^ Conn, A. R.; Gould, N. I. M.; Toint, Ph. L. (March 1991). "Convergence of quasi-Newton matrices generated by the symmetric rank one update". Mathematical Programming. 50 (1): 177–195. doi:10.1007/BF01594934. ISSN 0025-5610. S2CID 28028770.
  2. ^ Khalfan, H. Fayez; et al. (1993). "A Theoretical and Experimental Study of the Symmetric Rank-One Update". SIAM Journal on Optimization. 3 (1): 1–24. doi:10.1137/0803001.
  3. ^ Byrd, Richard H.; et al. (1996). "Analysis of a Symmetric Rank-One Trust Region Method". SIAM Journal on Optimization. 6 (4): 1025–1039. doi:10.1137/S1052623493252985.
  4. ^ Nocedal, Jorge; Wright, Stephen J. (1999). Numerical Optimization. Springer. ISBN 0-387-98793-2.
  5. ^ Brust, J.; et al. (2017). "On solving L-SR1 trust-region subproblems". Computational Optimization and Applications. 66: 245–266. doi:10.1007/s10589-016-9868-3.