Ensemble Square Root Filters

1. Introduction

Data assimilation addresses the problem of producing useful analyses and forecasts given imperfect dynamical models and observations. The Kalman filter is the optimal data assimilation method for linear dynamics with additive, state-independent Gaussian model and observation errors (Cohn 1997). An attractive feature of the Kalman filter is its calculation of forecast and analysis error covariances, in addition to the forecasts and analyses themselves. In this way, the Kalman filter produces estimates of forecast and analysis uncertainty, consistent with the dynamics and prescribed model and observation error statistics. However, the error covariance calculation components of the Kalman filter are difficult to implement in realistic systems because of (i) their computational cost, (ii) the nonlinearity of the dynamics, and (iii) poorly characterized error sources.

The ensemble Kalman filter (EnKF), proposed by Evensen (1994), addresses the first two of these problems by using ensemble representations for the forecast and analysis error covariances. Ensemble size limits the number of degrees of freedom used to represent forecast and analysis errors, and Kalman filter error covariance calculations are practical for modest-sized ensembles. The EnKF algorithm begins with an analysis ensemble whose mean is the current state estimate or analysis and whose statistics reflect the analysis error. Applying the full nonlinear dynamics to each analysis ensemble member produces the forecast ensemble; tangent linear and adjoint models of the dynamics are not required. Statistics of the forecast ensemble represent forecast errors; in its simplest form, the EnKF only accounts for forecast error due to uncertain initial conditions, neglecting forecast error due to model deficiencies. The forecast ensemble mean and covariance are then used to assimilate observations and compute a new analysis ensemble with appropriate statistics, and the cycle is repeated. The new analysis ensemble can be formed either stochastically (Houtekamer and Mitchell 1998; Burgers et al. 1998) or deterministically (Bishop et al. 2001; Anderson 2001; Whitaker and Hamill 2002). Deterministic methods were developed to address the adaptive observational network design problem and to avoid sampling issues associated with the use of β€œperturbed observations” in stochastic analysis ensemble update methods.

The EnKF and other ensemble data assimilation methods belong to the family of square root filters (SRFs), and a purpose of this paper is to demonstrate that deterministic analysis ensemble updates are implementations of Kalman SRFs (Bierman 1977; Maybeck 1982; Heemink et al. 2001). An immediate benefit of this identification is a framework for understanding and comparing deterministic analysis ensemble update schemes (Bishop et al. 2001; Anderson 2001; Whitaker and Hamill 2002). SRFs, like ensemble representations of covariances, are not unique. We begin our discussion in section 2 with a presentation of the Kalman SRF; issues related to implementation of ensemble SRFs are presented in section 3; in section 4 we summarize our results.

2. The Kalman SRF

Kalman SRF algorithms, originally developed for space-navigation systems with limited computational word length, demonstrate superior numerical precision and stability compared to the standard Kalman filter algorithm (Bierman 1977; Maybeck 1982). SRFs by construction avoid loss of positive definiteness of the error covariance matrices. SRFs have been used in earth science data assimilation methods where error covariances are approximated by truncated eigenvector expansions (Verlaan and Heemink 1997).

The usual Kalman filter covariance evolution equations are

$$\mathbf{P}^f_k = \mathbf{M}_k \mathbf{P}^a_{k-1} \mathbf{M}^T_k + \mathbf{Q}_k, \tag{1}$$

$$\mathbf{P}^a_k = (\mathbf{I} - \mathbf{K}_k \mathbf{H}_k)\,\mathbf{P}^f_k, \tag{2}$$

where $\mathbf{P}^f_k$ and $\mathbf{P}^a_k$ are, respectively, the $n \times n$ forecast and analysis error covariance matrices at time $t_k$; $\mathbf{M}_k$ is the tangent linear dynamics; $\mathbf{H}_k$ is the $p \times n$ observation operator; $\mathbf{R}_k$ is the $p \times p$ observation error covariance; $\mathbf{Q}_k$ is the $n \times n$ model error covariance matrix; and $\mathbf{K}_k \equiv \mathbf{P}^f_k \mathbf{H}^T_k (\mathbf{H}_k \mathbf{P}^f_k \mathbf{H}^T_k + \mathbf{R}_k)^{-1}$ is the Kalman gain. Here $n$ is the dimension of the system state and $p$ is the number of observations. The error covariance evolution depends on the state estimates and observations through the tangent linear dynamics $\mathbf{M}_k$. Equation (1) describes the propagation of analysis errors by the dynamics, with model error acting as a forcing. Equation (2) shows how an optimal data assimilation scheme uses observations to produce an analysis whose error covariance is less than that of the forecast.

The forecast and analysis error covariance matrices are symmetric positive-definite and can be represented as $\mathbf{P}^f_k = \mathbf{Z}^f_k (\mathbf{Z}^f_k)^T$ and $\mathbf{P}^a_k = \mathbf{Z}^a_k (\mathbf{Z}^a_k)^T$, where the matrices $\mathbf{Z}^f_k$ and $\mathbf{Z}^a_k$ are matrix square roots of $\mathbf{P}^f_k$ and $\mathbf{P}^a_k$, respectively; other matrix factorizations can be used in filters as well (Bierman 1977; Pham et al. 1998). A covariance matrix and its matrix square root have the same rank, that is, the same number of nonzero singular values. When a covariance matrix $\mathbf{P}$ has rank $m$, there is an $n \times m$ matrix square root $\mathbf{Z}$ satisfying $\mathbf{P} = \mathbf{Z}\mathbf{Z}^T$; in low-rank covariance representations the rank $m$ is much less than the state-space dimension $n$. This representation is not unique: $\mathbf{P}$ can also be represented as $\mathbf{P} = (\mathbf{Z}\mathbf{U})(\mathbf{Z}\mathbf{U})^T$, where $\mathbf{U}$ is any $m \times m$ orthogonal transformation, $\mathbf{U}\mathbf{U}^T = \mathbf{U}^T\mathbf{U} = \mathbf{I}$. The projection $\|\mathbf{x}^T\mathbf{Z}\|^2 = \mathbf{x}^T \mathbf{P}\, \mathbf{x}$ of an arbitrary $n$-vector $\mathbf{x}$ onto the matrix square root $\mathbf{Z}$ is uniquely determined, as is the subspace spanned by the columns of $\mathbf{Z}$.

Covariance matrix square roots are closely related to ensemble representations. The sample covariance $\mathbf{P}^a_k$ of an $m$-member analysis ensemble is given by $\mathbf{P}^a_k = \mathbf{S}\mathbf{S}^T/(m-1)$, where the columns of the $n \times m$ matrix $\mathbf{S}$ are mean-zero analysis perturbations about the analysis ensemble mean; the rank of $\mathbf{P}^a_k$ is at most $(m-1)$. A matrix square root of the analysis error covariance matrix $\mathbf{P}^a_k$ is the matrix of scaled analysis perturbation ensemble members $\mathbf{Z}^a_k = (m-1)^{-1/2}\mathbf{S}$.
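A short sketch of this correspondence, using an illustrative random ensemble: the scaled perturbation matrix reproduces the sample covariance exactly, and the rank bound is visible numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 4                              # state dimension, ensemble size

ensemble = rng.standard_normal((n, m))   # columns are analysis ensemble members
mean = ensemble.mean(axis=1, keepdims=True)
S = ensemble - mean                      # mean-zero analysis perturbations

Za = S / np.sqrt(m - 1)                  # matrix square root Z^a_k = (m-1)^{-1/2} S
Pa = Za @ Za.T                           # sample covariance P^a_k = S S^T / (m-1)

assert np.allclose(Pa, np.cov(ensemble))
assert np.linalg.matrix_rank(Pa) <= m - 1
```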

The Kalman SRF algorithm replaces error covariance evolution equations (1) and (2) with equations for the evolution of the forecast and analysis error covariance square roots $\mathbf{Z}^f_k$ and $\mathbf{Z}^a_k$ in a manner that avoids forming the full error covariance matrices. If the model error covariance $\mathbf{Q}_k$ is neglected, (1) can be replaced by

$$\mathbf{Z}^f_k = \mathbf{M}_k \mathbf{Z}^a_{k-1}. \tag{3}$$

In the ensemble context, (3) means applying the tangent linear dynamics to each column of $\mathbf{Z}^a_{k-1}$, that is, to each scaled analysis perturbation ensemble member. Practically, (3) can be implemented by applying the full nonlinear dynamics to each analysis ensemble member. For what follows, we only assume that the forecast error covariance matrix square root $\mathbf{Z}^f_k$ is available and do not require that it be calculated from (3). Section 3b discusses more sophisticated methods of generating $\mathbf{Z}^f_k$ that can include estimates of model error and give a forecast error covariance matrix square root whose rank is greater than the number of perturbations evolved by the dynamical model.

Next, analysis error covariance equation (2) is replaced with an equation for the analysis error covariance square root $\mathbf{Z}^a_k$. This equation determines how to form an analysis ensemble with appropriate statistics. Initial implementations of the EnKF formed the new analysis ensemble by updating each forecast ensemble member using the same analysis equations, equivalent to applying the linear operator $(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k)$ to the forecast perturbation ensemble $\mathbf{Z}^f_k$. This procedure gives an analysis ensemble whose error covariance is $(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k)\,\mathbf{P}^f_k\,(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k)^T$ and includes analysis error due to forecast error; the Kalman gain $\mathbf{K}_k$ depends on the relative size of forecast and observation error, and the factor $(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k)$ shows how much forecast errors are reduced. However, in this procedure the analysis ensemble does not include uncertainty due to observation error and so underestimates analysis error. A stochastic solution to this problem, proposed independently by Houtekamer and Mitchell (1998) and Burgers et al. (1998), is to compute analyses using each forecast ensemble member and, instead of using a single realization of the observations, to use an ensemble of simulated observations whose statistics reflect the observation error. This method is equivalent to the analysis perturbation ensemble update

$$\mathbf{Z}^a_k = (\mathbf{I} - \mathbf{K}_k\mathbf{H}_k)\,\mathbf{Z}^f_k + \mathbf{K}_k\mathbf{W}_k, \tag{4}$$

where $\mathbf{W}_k$ is a $p \times m$ matrix whose $m$ columns are identically distributed, mean-zero, Gaussian random vectors of length $p$ with covariance $\mathbf{R}_k/m$. The perturbed observation analysis equation (4) gives an analysis perturbation ensemble with correct expected statistics:

$$\langle \mathbf{Z}^a_k (\mathbf{Z}^a_k)^T \rangle = (\mathbf{I} - \mathbf{K}_k\mathbf{H}_k)\,\mathbf{P}^f_k\,(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k)^T + \mathbf{K}_k \mathbf{R}_k \mathbf{K}^T_k = \mathbf{P}^a_k. \tag{5}$$

However, the perturbed observation approach introduces an additional source of sampling error that reduces analysis error covariance accuracy and increases the probability of underestimating analysis error covariance (Whitaker and Hamill 2002). A Monte Carlo method avoiding perturbed observations is described in Pham (2001); the singular evolutive interpolated Kalman (SEIK) filter described there uses both deterministic factorization and stochastic approaches.
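A minimal sketch of the perturbed-observation update (4), with toy matrices standing in for a real system; the key point is that each column of `W` is drawn with covariance `R/m`, so the expected statistics (5) come out right.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, m = 6, 3, 20                       # state dim, obs count, ensemble size

Zf = rng.standard_normal((n, m)) / np.sqrt(m - 1)   # forecast perturbations Z^f_k
H = rng.standard_normal((p, n))                     # observation operator
R = 0.5 * np.eye(p)                                 # observation error covariance

Pf = Zf @ Zf.T
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)      # Kalman gain

# Eq. (4): columns of W are mean-zero Gaussian with covariance R/m
W = rng.multivariate_normal(np.zeros(p), R / m, size=m).T
Za = (np.eye(n) - K @ H) @ Zf + K @ W

# Za @ Za.T equals P^a_k in expectation, Eq. (5), not realization by realization.
```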

Kalman SRFs provide a deterministic algorithm for transforming the forecast ensemble into an analysis ensemble with consistent statistics. The β€œPotter method” for the Kalman SRF analysis update (Bierman 1977) is obtained by rewriting (2) as

$$\mathbf{P}^a_k = \mathbf{Z}^f_k \left[\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k\right] (\mathbf{Z}^f_k)^T, \tag{6}$$

where we define the $m \times p$ matrix $\mathbf{V}_k \equiv (\mathbf{H}_k \mathbf{Z}^f_k)^T$ and the $p \times p$ innovation covariance matrix $\mathbf{D}_k \equiv \mathbf{V}^T_k \mathbf{V}_k + \mathbf{R}_k$. Then the analysis perturbation ensemble is calculated from

$$\mathbf{Z}^a_k = \mathbf{Z}^f_k \mathbf{X}_k \mathbf{U}_k, \tag{7}$$

where $\mathbf{X}_k \mathbf{X}^T_k = \mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k$ and $\mathbf{U}_k$ is an arbitrary $m \times m$ orthogonal matrix. As formulated, the updated ensemble $\mathbf{Z}^a_k$ is a linear combination of the columns of $\mathbf{Z}^f_k$ and is obtained by inverting the $p \times p$ matrix $\mathbf{D}_k$ and computing a matrix square root $\mathbf{X}_k$ of the $m \times m$ matrix $\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k$.
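The Potter update (6)–(7) translates almost line for line into NumPy. This sketch uses a symmetric (eigenvalue-based) square root for $\mathbf{X}_k$ and the identity for $\mathbf{U}_k$, one of many valid choices since the square root is not unique; the test matrices are illustrative.

```python
import numpy as np

def potter_update(Zf, H, R):
    """Deterministic SRF analysis update, Eqs. (6)-(7), with U_k = I."""
    m = Zf.shape[1]
    V = (H @ Zf).T                                 # m x p matrix V_k
    D = V.T @ V + R                                # p x p innovation covariance D_k
    A = np.eye(m) - V @ np.linalg.solve(D, V.T)    # I - V D^{-1} V^T
    w, Q = np.linalg.eigh(A)                       # symmetric square root X_k
    X = Q @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ Q.T
    return Zf @ X                                  # analysis perturbations Z^a_k

# Check: Za Za^T reproduces the Kalman filter analysis covariance (I - K H) Pf.
rng = np.random.default_rng(3)
n, p, m = 6, 3, 4
Zf = rng.standard_normal((n, m))
H = rng.standard_normal((p, n))
R = np.eye(p)
Za = potter_update(Zf, H, R)
Pf = Zf @ Zf.T
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
assert np.allclose(Za @ Za.T, (np.eye(n) - K @ H) @ Pf)
```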

3. Ensemble SRFs

a. Analysis ensemble

In many typical earth science data assimilation applications the state dimension $n$ and the number of observations $p$ are large, and the method for computing the matrix square root of $\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k$ and the updated analysis perturbation ensemble $\mathbf{Z}^a_k$ must be chosen accordingly. A direct approach is to solve first the linear system $\mathbf{D}_k \mathbf{Y}_k = \mathbf{H}_k \mathbf{Z}^f_k$ for the $p \times m$ matrix $\mathbf{Y}_k$, that is, to solve

$$(\mathbf{H}_k \mathbf{P}^f_k \mathbf{H}^T_k + \mathbf{R}_k)\,\mathbf{Y}_k = \mathbf{H}_k \mathbf{Z}^f_k, \tag{8}$$

as is done in the first step of the Physical-space Statistical Analysis System (PSAS) algorithm (Cohn et al. 1998). Then, the $m \times m$ matrix $\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k = \mathbf{I} - (\mathbf{H}_k \mathbf{Z}^f_k)^T \mathbf{Y}_k$ is formed, and its matrix square root $\mathbf{X}_k$ is computed and applied to $\mathbf{Z}^f_k$ as in (7). Solution of (8), even when $p$ is large, is practical when the forecast error covariance has a low-rank representation and the inverse of the observation error covariance is available (see the appendix). Iterative methods whose cost is on the order of the cost of applying the innovation covariance matrix are appropriate when the forecast error covariance is represented by a correlation model.
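When $\mathbf{R}^{-1}_k$ is cheap to apply, Eq. (8) can be solved without ever factoring a $p \times p$ matrix by using the Sherman–Morrison–Woodbury identity (see the appendix). The following is a sketch under those assumptions, not a reproduction of the PSAS implementation.

```python
import numpy as np

def solve_innovation_system(Zf, H, R_inv):
    """Solve (H Pf H^T + R) Y = H Zf, Eq. (8), inverting only m x m matrices.

    Uses the Sherman-Morrison-Woodbury identity with Pf = Zf Zf^T, assuming
    the inverse observation error covariance R_inv is available.
    """
    m = Zf.shape[1]
    HZ = H @ Zf                          # p x m
    RiHZ = R_inv @ HZ                    # p x m
    # (HZ HZ^T + R)^{-1} = R^{-1} - R^{-1} HZ (I + HZ^T R^{-1} HZ)^{-1} HZ^T R^{-1}
    small = np.eye(m) + HZ.T @ RiHZ      # m x m
    Y = RiHZ - RiHZ @ np.linalg.solve(small, HZ.T @ RiHZ)
    return Y                             # p x m solution of Eq. (8)
```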

When observation errors are uncorrelated, observations can be assimilated one at a time, or serially (Houtekamer and Mitchell 2001; Bishop et al. 2001). For a single observation, $p = 1$, $\mathbf{V}_k$ is a column vector, and the innovation covariance $\mathbf{D}_k$ is a scalar. In this case, a matrix square root of $\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k$ can be computed in closed form by taking the ansatz

$$\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k = (\mathbf{I} - \beta_k \mathbf{V}_k \mathbf{V}^T_k)(\mathbf{I} - \beta_k \mathbf{V}_k \mathbf{V}^T_k)^T \tag{9}$$

and solving for the scalar $\beta_k$, which gives $\beta_k = [\mathbf{D}_k \pm (\mathbf{R}_k \mathbf{D}_k)^{1/2}]^{-1}$. The analysis ensemble update for $p = 1$ is

$$\mathbf{Z}^a_k = \mathbf{Z}^f_k (\mathbf{I} - \beta_k \mathbf{V}_k \mathbf{V}^T_k); \tag{10}$$

see Andrews (1968) for a general solution involving matrix square roots of $p \times p$ matrices. At observation locations, the analysis error ensemble is related to the forecast error ensemble by $\mathbf{H}_k \mathbf{Z}^a_k = (1 - \beta_k \mathbf{V}^T_k \mathbf{V}_k)\,\mathbf{H}_k \mathbf{Z}^f_k$. The scalar factor $(1 - \beta_k \mathbf{V}^T_k \mathbf{V}_k)$ has absolute value less than or equal to one and is positive when the plus sign is chosen in the definition of $\beta_k$.
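A sketch of the serial loop implied by (9)–(10), assuming uncorrelated observation errors so each scalar observation can be assimilated in turn; the rows of `H` and the diagonal `r_diag` of $\mathbf{R}_k$ are assumed inputs. With the plus sign in $\beta_k$, the scalar factor stays positive, so the update never flips the sign of the perturbations at the observation location.

```python
import numpy as np

def serial_srf_update(Zf, H, r_diag):
    """Serial deterministic update, Eqs. (9)-(10), one observation at a time.

    Assumes uncorrelated observation errors: r_diag holds the diagonal of R.
    """
    Z = Zf.copy()
    for i in range(H.shape[0]):                    # loop over scalar observations
        V = (H[i:i+1, :] @ Z).T                    # m x 1 column vector V_k
        D = (V.T @ V).item() + r_diag[i]           # scalar innovation covariance
        beta = 1.0 / (D + np.sqrt(r_diag[i] * D))  # plus sign in beta_k
        Z = Z @ (np.eye(Z.shape[1]) - beta * (V @ V.T))   # Eq. (10)
    return Z
```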

In Whitaker and Hamill (2002) the analysis perturbation ensemble is found from

$$\mathbf{Z}^a_k = (\mathbf{I} - \tilde{\mathbf{K}}_k \mathbf{H}_k)\,\mathbf{Z}^f_k, \tag{11}$$

where the matrix $\tilde{\mathbf{K}}_k$ is a solution of the nonlinear equation

$$(\mathbf{I} - \tilde{\mathbf{K}}_k \mathbf{H}_k)\,\mathbf{P}^f_k\,(\mathbf{I} - \tilde{\mathbf{K}}_k \mathbf{H}_k)^T = \mathbf{P}^a_k. \tag{12}$$

In the case of a single observation, a solution of (12) is

$$\tilde{\mathbf{K}}_k = \beta_k \mathbf{Z}^f_k \mathbf{V}_k, \tag{13}$$

where the plus sign is chosen in the definition of $\beta_k$. The corresponding analysis perturbation ensemble update,

$$\mathbf{Z}^a_k = (\mathbf{I} - \beta_k \mathbf{Z}^f_k \mathbf{V}_k \mathbf{H}_k)\,\mathbf{Z}^f_k = \mathbf{Z}^f_k (\mathbf{I} - \beta_k \mathbf{V}_k \mathbf{V}^T_k), \tag{14}$$

is identical to (10). Observations with correlated errors, for example, radiosonde height observations from the same sounding, can be handled by applying the whitening transformation $\mathbf{R}^{-1/2}_k$ to the observations to form a new observation set with uncorrelated errors.
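A sketch of the whitening step, assuming $\mathbf{R}_k$ is symmetric positive definite: transforming the observations and the observation operator by $\mathbf{R}^{-1/2}_k$ yields an equivalent observation set with identity error covariance, to which the serial update applies directly.

```python
import numpy as np

def whiten_observations(y, H, R):
    """Transform correlated observations to uncorrelated ones via R^{-1/2}."""
    w, Q = np.linalg.eigh(R)                    # R = Q diag(w) Q^T, w > 0
    R_inv_sqrt = Q @ np.diag(w ** -0.5) @ Q.T   # symmetric R^{-1/2}
    # whitened obs, whitened operator, and the new (identity) error covariance
    return R_inv_sqrt @ y, R_inv_sqrt @ H, np.eye(len(y))
```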

Another method of computing the updated analysis ensemble is to use the Sherman–Morrison–Woodbury identity (Golub and Van Loan 1996) to show that

$$\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k = \left[\mathbf{I} + (\mathbf{Z}^f_k)^T \mathbf{H}^T_k \mathbf{R}^{-1}_k \mathbf{H}_k \mathbf{Z}^f_k\right]^{-1}. \tag{15}$$

The $m \times m$ matrix on the right-hand side of (15) is practical to compute when the inverse observation error covariance matrix $\mathbf{R}^{-1}_k$ is available. This approach avoids inverting the $p \times p$ matrix $\mathbf{D}_k$ and is used in the ensemble transform Kalman filter (ETKF), where the analysis update is (Bishop et al. 2001)

$$\mathbf{Z}^a_k = \mathbf{Z}^f_k \mathbf{C}_k (\boldsymbol{\Gamma}_k + \mathbf{I})^{-1/2}, \tag{16}$$

where $\mathbf{C}_k \boldsymbol{\Gamma}_k \mathbf{C}^T_k$ is the eigenvalue decomposition of $(\mathbf{Z}^f_k)^T \mathbf{H}^T_k \mathbf{R}^{-1}_k \mathbf{H}_k \mathbf{Z}^f_k$. Note that the matrix $\mathbf{C}_k$ of orthonormal eigenvectors is not uniquely determined. Comparison with (15) shows that $\mathbf{C}_k(\boldsymbol{\Gamma}_k + \mathbf{I})^{-1}\mathbf{C}^T_k$ is the eigenvalue decomposition of $\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k$, and thus that $\mathbf{C}_k(\boldsymbol{\Gamma}_k + \mathbf{I})^{-1/2}$ is a square root of $\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k$.
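A minimal sketch of the ETKF update (16) under the same toy assumptions as the earlier examples; `np.linalg.eigh` supplies $\mathbf{C}_k$ and $\boldsymbol{\Gamma}_k$, and only $m \times m$ matrices are decomposed.

```python
import numpy as np

def etkf_update(Zf, H, R_inv):
    """ETKF analysis update, Eq. (16): Za = Zf C (Gamma + I)^{-1/2}."""
    A = Zf.T @ H.T @ R_inv @ H @ Zf      # m x m matrix Z^fT H^T R^{-1} H Z^f
    Gamma, C = np.linalg.eigh(A)         # eigenvalue decomposition C Gamma C^T
    return Zf @ C @ np.diag((Gamma + 1.0) ** -0.5)
```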

In the ensemble adjustment Kalman filter (EAKF) the form of the analysis ensemble update is (Anderson 2001)

$$\mathbf{Z}^a_k = \mathbf{A}_k \mathbf{Z}^f_k; \tag{17}$$

the ensemble adjustment matrix $\mathbf{A}_k$ is defined by

$$\mathbf{A}_k \equiv \mathbf{F}_k \mathbf{G}_k \tilde{\mathbf{C}}_k (\mathbf{I} + \tilde{\boldsymbol{\Gamma}}_k)^{-1/2} \mathbf{G}^{-1}_k \mathbf{F}^T_k, \tag{18}$$

where $\mathbf{P}^f_k = \mathbf{F}_k \mathbf{G}^2_k \mathbf{F}^T_k$ is the eigenvalue decomposition of $\mathbf{P}^f_k$, and the orthogonal matrix $\tilde{\mathbf{C}}_k$ is chosen so that $\tilde{\mathbf{C}}^T_k \mathbf{G}_k \mathbf{F}^T_k \mathbf{H}^T_k \mathbf{R}^{-1}_k \mathbf{H}_k \mathbf{F}_k \mathbf{G}_k \tilde{\mathbf{C}}_k = \tilde{\boldsymbol{\Gamma}}_k$ is diagonal. Choosing the orthogonal matrix $\tilde{\mathbf{C}}_k$ to be $\tilde{\mathbf{C}}_k = \mathbf{G}^{-1}_k \mathbf{F}^T_k \mathbf{Z}^f_k \mathbf{C}_k$ gives $\tilde{\boldsymbol{\Gamma}}_k = \boldsymbol{\Gamma}_k$, and the ensemble adjustment matrix becomes

$$\mathbf{A}_k = \mathbf{Z}^f_k \mathbf{C}_k (\mathbf{I} + \boldsymbol{\Gamma}_k)^{-1/2} \mathbf{G}^{-1}_k \mathbf{F}^T_k. \tag{19}$$

The EAKF analysis update (17) becomes

$$\mathbf{Z}^a_k = \mathbf{Z}^f_k \mathbf{C}_k (\mathbf{I} + \boldsymbol{\Gamma}_k)^{-1/2} \mathbf{G}^{-1}_k \mathbf{F}^T_k \mathbf{Z}^f_k. \tag{20}$$

The EAKF analysis ensemble given by (20) is the same as applying the transformation $\mathbf{G}^{-1}_k \mathbf{F}^T_k \mathbf{Z}^f_k$ to the ETKF analysis ensemble. The matrix $\mathbf{G}^{-1}_k \mathbf{F}^T_k \mathbf{Z}^f_k$ is orthogonal and is, in fact, the matrix of right singular vectors of $\mathbf{Z}^f_k$. Therefore, $\mathbf{C}_k(\mathbf{I} + \boldsymbol{\Gamma}_k)^{-1/2} \mathbf{G}^{-1}_k \mathbf{F}^T_k \mathbf{Z}^f_k$ is a matrix square root of $\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k$.
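A small numerical check of this equivalence, under illustrative random inputs: the ETKF and EAKF ensembles differ member by member but share the same analysis error covariance, since they differ only by the orthogonal right-transformation described above.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, m = 6, 3, 4
Zf = rng.standard_normal((n, m))
H = rng.standard_normal((p, n))
R_inv = np.eye(p)

# ETKF ensemble, Eq. (16)
Gamma, C = np.linalg.eigh(Zf.T @ H.T @ R_inv @ H @ Zf)
Za_etkf = Zf @ C @ np.diag((Gamma + 1.0) ** -0.5)

# EAKF ensemble, Eq. (20): right-multiply by the orthogonal matrix
# G^{-1} F^T Zf, i.e., the matrix of right singular vectors of Zf.
F, G, Wt = np.linalg.svd(Zf, full_matrices=False)
Za_eakf = Za_etkf @ Wt

# Different ensembles spanning the same subspace, identical covariance.
assert not np.allclose(Za_etkf, Za_eakf)
assert np.allclose(Za_etkf @ Za_etkf.T, Za_eakf @ Za_eakf.T)
```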

Beginning with the same forecast error covariance, the direct, serial, ETKF, and EAKF methods produce different analysis ensembles that span the same state-space subspace and have the same covariance. Higher-order statistical moments of the different analysis ensembles will differ, a relevant issue for nonlinear dynamics. Table 1 shows that the computational costs of the direct, ETKF, and EAKF methods scale comparably (see the appendix for details). There are differences in precise computational cost; for instance, the EAKF contains an additional singular value decomposition (SVD) calculation of the forecast ensemble with cost $O(m^2 n + m^3)$. The computational cost of the serial filter is less dependent on the rank of the forecast error covariance and more sensitive to the number of observations. This difference is important when techniques to account for model error and control filter divergence, as described in the next section, result in an effective forecast error covariance dimension $m$ much larger than the dynamical forecast ensemble dimension.

b. Forecast error statistics

In the previous section we examined methods of forming the analysis ensemble given a matrix square root of the forecast error covariance. There are two fundamental problems associated with directly using the ensemble generated by (3). First, ensemble size is limited by the computational cost of applying the forecast model to each ensemble member. Small ensembles have few degrees of freedom available to represent errors and suffer from sampling error that further degrades the forecast error covariance representation. Sampling error leads to loss of accuracy and underestimation of error covariances that can cause filter divergence. Techniques to deal with this problem include distance-dependent covariance filtering and covariance inflation (Whitaker and Hamill 2002). Covariance localization in the serial method consists of applying a Schur product with a distance-dependent correlation function in the definition of $\tilde{\mathbf{K}}_k$ (Whitaker and Hamill 2002); a sketch follows this paragraph. Similarly, in the EAKF the observations affecting an analysis grid point can be restricted to nearby ones (Anderson 2001).
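A minimal sketch of distance-dependent localization in a serial, single-observation update: the ensemble covariance between the state and the observed quantity is tapered by a Schur (elementwise) product with a correlation function of distance. A simple Gaussian taper stands in here for a compactly supported correlation such as Gaspari–Cohn, and the grid geometry is illustrative.

```python
import numpy as np

def localized_serial_gain(Zf, h_row, r_obs, dist, length_scale):
    """Localized gain for one scalar observation: Schur-taper the ensemble
    covariance P^f h^T before forming the gain.

    dist[i] is the distance from grid point i to the observation location.
    """
    V = (h_row @ Zf).T                        # m x 1
    D = (V.T @ V).item() + r_obs              # scalar innovation covariance
    PfHt = Zf @ V                             # n x 1 ensemble covariance P^f h^T
    taper = np.exp(-0.5 * (dist / length_scale) ** 2)
    return (taper[:, None] * PfHt) / D        # localized gain, n x 1
```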

The second and less easily resolved problem with using the ensemble generated by (3) is the neglect of model error and the resulting underestimation of the forecast error covariance. Since there is little theoretical knowledge of model error statistics in complex systems, model error parameterizations combined with adaptive methods are likely necessary (Dee 1995). When the model error covariance $\mathbf{Q}_k$ is taken to have large-scale structure, a reasonable representation is an ensemble or square root decomposition $\mathbf{Q}_k = \mathbf{Z}^d_k (\mathbf{Z}^d_k)^T$, where $\mathbf{Z}^d_k$ is an $n \times q$ matrix. Then, a square root of $\mathbf{P}^f_k$ is the $n \times m$ matrix

$$\mathbf{Z}^f_k = \left[\mathbf{M}_k \mathbf{Z}^a_{k-1} \;\; \mathbf{Z}^d_k\right], \tag{21}$$

where $m = m_e + q$ and $m_e$ is the number of dynamically evolved forecast perturbations. With this model error representation, ensemble size grows by $q$ with each forecast–analysis cycle. Ensemble size can be limited by computing the singular value decomposition of the ensemble and discarding components with small variance (Heemink et al. 2001); a sketch of this augment-and-truncate cycle is given below. A larger ensemble with evolved analysis error and model error could be used in the analysis step, and a smaller ensemble used in the dynamical forecast stage. When the model error covariance $\mathbf{Q}_k$ is approximated as an operator, for instance using a correlation model, Lanczos methods can be used to compute the leading eigenmodes of $\mathbf{M}_k \mathbf{Z}^a_{k-1} (\mathbf{M}_k \mathbf{Z}^a_{k-1})^T + \mathbf{Q}_k$ and form $\mathbf{Z}^f_k$ (Cohn and Todling 1996). Such a forecast error covariance model would resemble those used in β€œhybrid” methods (Hamill and Snyder 2000). In this case, the rank of $\mathbf{Z}^f_k$ can be substantially larger than the forecast ensemble size, making the serial method attractive. Monte Carlo solutions are another option, as in Mitchell and Houtekamer (2000), where model error parameters were estimated from innovations and used to generate realizations of model error. Perturbing model physics, as done in system simulation, explicitly accounts for some aspects of model uncertainty (Houtekamer et al. 1996).
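The augment-and-truncate cycle referenced above, as a sketch: the evolved analysis perturbations are concatenated with model error columns as in (21), then the ensemble is truncated back to a fixed size by discarding low-variance singular components. All matrices here are illustrative placeholders.

```python
import numpy as np

def augment_and_truncate(M, Za_prev, Zd, max_members):
    """Form Z^f_k = [M Z^a_{k-1}, Z^d_k], Eq. (21), then limit ensemble size
    by an SVD truncation that discards components with small variance."""
    Zf = np.hstack([M @ Za_prev, Zd])          # n x (m_e + q) square root
    U, s, _ = np.linalg.svd(Zf, full_matrices=False)
    r = min(max_members, len(s))
    return U[:, :r] * s[:r]                    # best rank-r square root of P^f_k
```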

4. Summary and discussion

Ensemble forecast/assimilation methods use low-rank ensemble representations of forecast and analysis error covariance matrices. These ensembles are scaled matrix square roots of the error covariance matrices, and so ensemble data assimilation methods can be viewed as square root filters (SRFs; Bierman 1977). After assimilation of observations, the analysis ensemble can be constructed stochastically or deterministically. Deterministic construction of the analysis ensemble eliminates one source of sampling error and leads to deterministic SRFs being more accurate than stochastic SRFs in some examples (Whitaker and Hamill 2002; Anderson 2001). SRFs are not unique since different ensembles can have the same covariance. This lack of uniqueness is illustrated in three recently proposed ensemble data assimilation methods that use the Kalman SRF method to update the analysis ensemble (Bishop et al. 2001; Anderson 2001; Whitaker and Hamill 2002). Identifying the methods as SRFs allows a clearer discussion and comparison of their different analysis ensemble updates.

Accounting for small ensemble size and model deficiencies remains a significant issue in ensemble data assimilation systems. Schur products can be used to filter ensemble covariances and effectively increase covariance rank (Houtekamer and Mitchell 1998, 2001; Hamill et al. 2001; Whitaker and Hamill 2002). Covariance inflation is one simple way of accounting for model error and stabilizing the filter (Hamill et al. 2001; Anderson 2001; Whitaker and Hamill 2002). Hybrid methods represent forecast error covariances with a combination of ensemble and parameterized correlation models (Hamill and Snyder 2000). Here we have shown deterministic methods of including model error into a square root or ensemble data assimilation system when the model error has large-scale representation and when the model error is represented by a correlation model. However, the primary difficulty remains obtaining estimates of model error.

The nonuniqueness of SRFs has been exploited in estimation theory to design filters with desirable computational and numerical properties. An open question is whether there are ensemble properties that would make a particular SRF implementation better than another, or if the only issue is computational cost. For instance, it may be possible to choose an analysis update scheme that preserves higher-order, non-Gaussian statistics of the forecast ensemble. This question can only be answered by detailed comparisons of different methods in a realistic setting where other details of the assimilation system such as modeling of systematic errors or data quality control may prove to be as important.

Acknowledgments

Comments and suggestions of two anonymous reviewers improved the presentation of this work. The authors thank Ricardo Todling (NASA DAO) for his comments, suggestions, and corrections. IRI is supported by its sponsors and NOAA Office of Global Programs Grant NA07GP0213. Craig Bishop received support under ONR Project Element 0601153N, Project Number BE-033-0345, and also ONR Grant N00014-00-1-0106.


APPENDIX

Computational Costs

Here we detail the computational cost scalings summarized in Table 1. All the methods require applying the observation operator to the ensemble members to form $\mathbf{H}_k \mathbf{Z}^f_k$, and we do not include its cost. This cost is important when comparing ensemble and nonensemble methods, particularly for complex observation operators. The cost of computing $\mathbf{H}_k \mathbf{Z}^f_k$ is formally $O(mnp)$ but may be significantly less when $\mathbf{H}_k$ is sparse or can be applied efficiently. We also assume that the inverse observation error covariance $\mathbf{R}^{-1}_k$ is inexpensive to apply.

Direct method

  1. Solve $(\mathbf{H}_k \mathbf{P}^f_k \mathbf{H}^T_k + \mathbf{R}_k)\,\mathbf{Y}_k = \mathbf{H}_k \mathbf{Z}^f_k$ for $\mathbf{Y}_k$. If $\mathbf{R}^{-1}_k$ is available, the solution can be obtained using the Sherman–Morrison–Woodbury identity (Golub and Van Loan 1996), $(\mathbf{H}_k \mathbf{P}^f_k \mathbf{H}^T_k + \mathbf{R}_k)^{-1} = \mathbf{R}^{-1}_k - \mathbf{R}^{-1}_k \mathbf{H}_k \mathbf{Z}^f_k [\mathbf{I} + (\mathbf{H}_k \mathbf{Z}^f_k)^T \mathbf{R}^{-1}_k \mathbf{H}_k \mathbf{Z}^f_k]^{-1} (\mathbf{H}_k \mathbf{Z}^f_k)^T \mathbf{R}^{-1}_k$, inverting only $m \times m$ matrices. Cost: $O(m^3 + m^2 p)$.
  2. Form $\mathbf{I} - (\mathbf{H}_k \mathbf{Z}^f_k)^T \mathbf{Y}_k$. Cost: $O(m^2 p)$.
  3. Compute the matrix square root of the $m \times m$ matrix $\mathbf{I} - (\mathbf{H}_k \mathbf{Z}^f_k)^T \mathbf{Y}_k$. Cost: $O(m^3)$.
  4. Apply the matrix square root to $\mathbf{Z}^f_k$. Cost: $O(m^2 n)$.

Total cost: $O(m^3 + m^2 p + m^2 n)$.

Serial method

For each observation:

  1. Form $\mathbf{D}_k$. Cost: $O(m)$.
  2. Form $\mathbf{I} - \beta_k \mathbf{V}_k \mathbf{V}^T_k$ and apply to $\mathbf{Z}^f_k$. Cost: $O(nm)$.

Total cost: O(mp + mnp).

ETKF

  1. Form 𝗭f_T𝗛Tπ—₯βˆ’1𝗛𝗭_f. Assume π—₯βˆ’1 inexpensive to apply. Cost: O(m_2_p).
  2. Compute eigenvalue decomposition of m Γ— m matrix. Cost: O(_m_3).
  3. Apply to 𝗭f. Cost: O(m_2_n).

Total cost: O(m_2_p + _m_3 + m_2_n).

EAKF

Cost in addition to ETKF:

  1. Compute the eigenvalue decomposition of $\mathbf{P}^f_k$ (low rank). Cost: $O(m^2 n + m^3)$.
  2. Form $\mathbf{F}^T_k \mathbf{Z}^f_k$. Cost: $O(m^2 n)$.

Total cost: $O(m^2 p + m^3 + m^2 n)$.

Table 1. Summary of analysis ensemble calculation computational cost as a function of forecast ensemble size $m$, number of observations $p$, and state dimension $n$.

  Method   Cost
  Direct   $O(m^3 + m^2 p + m^2 n)$
  Serial   $O(mp + mnp)$
  ETKF     $O(m^2 p + m^3 + m^2 n)$
  EAKF     $O(m^2 p + m^3 + m^2 n)$