Ensemble Square Root Filters
1. Introduction
Data assimilation addresses the problem of producing useful analyses and forecasts given imperfect dynamical models and observations. The Kalman filter is the optimal data assimilation method for linear dynamics with additive, state-independent Gaussian model and observation errors (Cohn 1997). An attractive feature of the Kalman filter is its calculation of forecast and analysis error covariances, in addition to the forecasts and analyses themselves. In this way, the Kalman filter produces estimates of forecast and analysis uncertainty, consistent with the dynamics and prescribed model and observation error statistics. However, the error covariance calculation components of the Kalman filter are difficult to implement in realistic systems because of (i) their computational cost, (ii) the nonlinearity of the dynamics, and (iii) poorly characterized error sources.
The ensemble Kalman filter (EnKF), proposed by Evensen (1994), addresses the first two of these problems by using ensemble representations for the forecast and analysis error covariances. Ensemble size limits the number of degrees of freedom used to represent forecast and analysis errors, and Kalman filter error covariance calculations are practical for modest-sized ensembles. The EnKF algorithm begins with an analysis ensemble whose mean is the current state estimate or analysis and whose statistics reflect the analysis error. Applying the full nonlinear dynamics to each analysis ensemble member produces the forecast ensemble; tangent linear and adjoint models of the dynamics are not required. Statistics of the forecast ensemble represent forecast errors; in its simplest form, the EnKF only accounts for forecast error due to uncertain initial conditions, neglecting forecast error due to model deficiencies. The forecast ensemble mean and covariance are then used to assimilate observations and compute a new analysis ensemble with appropriate statistics, and the cycle is repeated. The new analysis ensemble can be formed either stochastically (Houtekamer and Mitchell 1998; Burgers et al. 1998) or deterministically (Bishop et al. 2001; Anderson 2001; Whitaker and Hamill 2002). Deterministic methods were developed to address the adaptive observational network design problem and to avoid sampling issues associated with the use of "perturbed observations" in stochastic analysis ensemble update methods.
The EnKF and other ensemble data assimilation methods belong to the family of square root filters (SRFs), and a purpose of this paper is to demonstrate that deterministic analysis ensemble updates are implementations of Kalman SRFs (Bierman 1977; Maybeck 1982; Heemink et al. 2001). An immediate benefit of this identification is a framework for understanding and comparing deterministic analysis ensemble update schemes (Bishop et al. 2001; Anderson 2001; Whitaker and Hamill 2002). SRFs, like ensemble representations of covariances, are not unique. We begin our discussion in section 2 with a presentation of the Kalman SRF; issues related to implementation of ensemble SRFs are presented in section 3; in section 4 we summarize our results.
2. The Kalman SRF
Kalman SRF algorithms, originally developed for space-navigation systems with limited computational word length, demonstrate superior numerical precision and stability compared to the standard Kalman filter algorithm (Bierman 1977; Maybeck 1982). SRFs by construction avoid loss of positive definiteness of the error covariance matrices. SRFs have been used in earth science data assimilation methods where error covariances are approximated by truncated eigenvector expansions (Verlaan and Heemink 1997).
The usual Kalman filter covariance evolution equations are

$$\mathbf{P}^f_k = \mathbf{M}_k \mathbf{P}^a_{k-1} \mathbf{M}^T_k + \mathbf{Q}_k, \tag{1}$$

$$\mathbf{P}^a_k = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right) \mathbf{P}^f_k, \tag{2}$$

where $\mathbf{P}^f_k$ and $\mathbf{P}^a_k$ are, respectively, the $n \times n$ forecast and analysis error covariance matrices at time $t_k$; $\mathbf{M}_k$ is the tangent linear dynamics; $\mathbf{H}_k$ is the $p \times n$ observation operator; $\mathbf{R}_k$ is the $p \times p$ observation error covariance; $\mathbf{Q}_k$ is the $n \times n$ model error covariance matrix and $\mathbf{K}_k \equiv \mathbf{P}^f_k \mathbf{H}^T_k (\mathbf{H}_k \mathbf{P}^f_k \mathbf{H}^T_k + \mathbf{R}_k)^{-1}$ is the Kalman gain; $n$ is the dimension of the system state; and $p$ is the number of observations. The error covariance evolution depends on the state estimates and observations through the tangent linear dynamics $\mathbf{M}_k$. The propagation of analysis errors by the dynamics, with model error acting as a forcing, is described by Eq. (1). Equation (2) shows how an optimal data assimilation scheme uses observations to produce an analysis whose error covariance is less than that of the forecast.
The forecast and analysis error covariance matrices are symmetric positive-definite matrices and can be represented as $\mathbf{P}^f_k = \mathbf{Z}^f_k \mathbf{Z}^{fT}_k$ and $\mathbf{P}^a_k = \mathbf{Z}^a_k \mathbf{Z}^{aT}_k$, where the matrices $\mathbf{Z}^f_k$ and $\mathbf{Z}^a_k$ are matrix square roots of $\mathbf{P}^f_k$ and $\mathbf{P}^a_k$, respectively; other matrix factorizations can be used in filters as well (Bierman 1977; Pham et al. 1998). A covariance matrix and its matrix square root have the same rank, or number of nonzero singular values. When a covariance matrix $\mathbf{P}$ has rank $m$, there is an $n \times m$ matrix square root $\mathbf{Z}$ satisfying $\mathbf{P} = \mathbf{Z}\mathbf{Z}^T$; in low-rank covariance representations the rank $m$ is much less than the state-space dimension $n$. This representation is not unique; $\mathbf{P}$ can also be represented as $\mathbf{P} = (\mathbf{Z}\mathbf{U})(\mathbf{Z}\mathbf{U})^T$, where the matrix $\mathbf{U}$ is any $m \times m$ orthogonal transformation, $\mathbf{U}\mathbf{U}^T = \mathbf{U}^T\mathbf{U} = \mathbf{I}$. The projection $\|\mathbf{x}^T \mathbf{Z}\|^2 = \mathbf{x}^T \mathbf{P} \mathbf{x}$ of an arbitrary $n$-vector $\mathbf{x}$ onto the matrix square root $\mathbf{Z}$ is uniquely determined, as is the subspace spanned by the columns of $\mathbf{Z}$.
Covariance matrix square roots are closely related to ensemble representations. The sample covariance $\mathbf{P}^a_k$ of an $m$-member analysis ensemble is given by $\mathbf{P}^a_k = \mathbf{S}\mathbf{S}^T/(m-1)$, where the columns of the $n \times m$ matrix $\mathbf{S}$ are mean-zero analysis perturbations about the analysis ensemble mean; the rank of $\mathbf{P}^a_k$ is at most $(m-1)$. A matrix square root of the analysis error covariance matrix $\mathbf{P}^a_k$ is the matrix of scaled analysis perturbation ensemble members $\mathbf{Z}^a_k = (m-1)^{-1/2}\mathbf{S}$.
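To make the ensemble–square root correspondence concrete, here is a minimal NumPy sketch (ours, not from the original text; the array names are hypothetical) that builds a covariance square root from ensemble perturbations and checks the non-uniqueness under orthogonal rotation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 10                              # state dimension, ensemble size

# Hypothetical analysis ensemble: columns are ensemble members.
X = rng.standard_normal((n, m))
S = X - X.mean(axis=1, keepdims=True)      # mean-zero analysis perturbations
Za = S / np.sqrt(m - 1)                    # square root: Pa = Za @ Za.T

Pa = Za @ Za.T
assert np.linalg.matrix_rank(Pa) <= m - 1  # rank at most m - 1

# Non-uniqueness: any m x m orthogonal U gives another square root of Pa.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
assert np.allclose((Za @ U) @ (Za @ U).T, Pa)
```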
The Kalman SRF algorithm replaces error covariance evolution equations (1) and (2) with equations for the evolution of the forecast and analysis error covariance square roots $\mathbf{Z}^f_k$ and $\mathbf{Z}^a_k$ in a manner that avoids forming the full error covariance matrices. If the model error covariance $\mathbf{Q}_k$ is neglected, (1) can be replaced by

$$\mathbf{Z}^f_k = \mathbf{M}_k \mathbf{Z}^a_{k-1}. \tag{3}$$

In the ensemble context, (3) means applying the tangent linear dynamics to each column of $\mathbf{Z}^a_{k-1}$, that is, to each scaled analysis perturbation ensemble member. Practically, (3) can be implemented by applying the full nonlinear dynamics to each analysis ensemble member. For what follows, we only assume that the forecast error covariance matrix square root $\mathbf{Z}^f_k$ is available; we do not assume or require that it be calculated from (3). Section 3b discusses more sophisticated methods of generating $\mathbf{Z}^f_k$ that can include estimates of model error and give a forecast error covariance matrix square root whose rank is greater than the number of perturbations evolved by the dynamical model.
Next, the analysis error covariance equation (2) is replaced with an equation for the analysis error covariance square root $\mathbf{Z}^a_k$. This equation determines how to form an analysis ensemble with appropriate statistics. Initial implementations of the EnKF formed the new analysis ensemble by updating each forecast ensemble member using the same analysis equations, equivalent to applying the linear operator $(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k)$ to the forecast perturbation ensemble $\mathbf{Z}^f_k$. This procedure gives an analysis ensemble whose error covariance is $(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k)\mathbf{P}^f_k(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k)^T$ and includes analysis error due to forecast error; the Kalman gain $\mathbf{K}_k$ depends on the relative size of forecast and observation error, and the factor $(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k)$ shows how much forecast errors are reduced. However, in this procedure the analysis ensemble does not include uncertainty due to observation error and so underestimates analysis error. A stochastic solution to this problem, proposed independently by Houtekamer and Mitchell (1998) and Burgers et al. (1998), is to compute analyses using each forecast ensemble member and, instead of using a single realization of the observations, to use an ensemble of simulated observations whose statistics reflect the observation error. This method is equivalent to the analysis perturbation ensemble update

$$\mathbf{Z}^a_k = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)\mathbf{Z}^f_k + \mathbf{K}_k \mathbf{W}_k, \tag{4}$$

where $\mathbf{W}_k$ is a $p \times m$ matrix whose $m$ columns are identically distributed, mean-zero, Gaussian random vectors of length $p$ with covariance $\mathbf{R}_k/m$. The perturbed observation analysis equation (4) gives an analysis perturbation ensemble with correct expected statistics:

$$\mathrm{E}\left[\mathbf{Z}^a_k \mathbf{Z}^{aT}_k\right] = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)\mathbf{P}^f_k\left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)^T + \mathbf{K}_k \mathbf{R}_k \mathbf{K}^T_k = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)\mathbf{P}^f_k = \mathbf{P}^a_k. \tag{5}$$

However, the perturbed observation approach introduces an additional source of sampling error that reduces analysis error covariance accuracy and increases the probability of underestimating analysis error covariance (Whitaker and Hamill 2002). A Monte Carlo method avoiding perturbed observations is described in Pham (2001); the singular evolutive interpolated Kalman (SEIK) filter uses both a deterministic factorization and a stochastic approach.
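A sketch of the perturbed observation update (4), under the same conventions as the earlier sketch and assuming a linear observation operator; the function and argument names are our illustrative choices, and forming the full gain explicitly is for clarity only:

```python
import numpy as np

def perturbed_obs_update(Zf, H, R, rng):
    """Stochastic (perturbed observation) analysis perturbation update, Eq. (4).

    Zf : (n, m) forecast perturbation square root
    H  : (p, n) linear observation operator
    R  : (p, p) observation error covariance (assumed positive definite)
    """
    n, m = Zf.shape
    # Kalman gain; forming Pf explicitly is only viable for small n.
    Pf = Zf @ Zf.T
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    # Columns of W are mean-zero Gaussian draws with covariance R/m.
    W = np.linalg.cholesky(R / m) @ rng.standard_normal((H.shape[0], m))
    return (np.eye(n) - K @ H) @ Zf + K @ W
```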
Kalman SRFs provide a deterministic algorithm for transforming the forecast ensemble into an analysis ensemble with consistent statistics. The "Potter method" for the Kalman SRF analysis update (Bierman 1977) is obtained by rewriting (2) as

$$\mathbf{P}^a_k = \mathbf{Z}^f_k \left(\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k\right) \mathbf{Z}^{fT}_k, \tag{6}$$

where we define the $m \times p$ matrix $\mathbf{V}_k \equiv (\mathbf{H}_k \mathbf{Z}^f_k)^T$ and the $p \times p$ innovation covariance matrix $\mathbf{D}_k \equiv \mathbf{V}^T_k \mathbf{V}_k + \mathbf{R}_k$. Then the analysis perturbation ensemble is calculated from

$$\mathbf{Z}^a_k = \mathbf{Z}^f_k \mathbf{X}_k \mathbf{U}_k, \tag{7}$$

where $\mathbf{X}_k \mathbf{X}^T_k = (\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k)$ and $\mathbf{U}_k$ is an arbitrary $m \times m$ orthogonal matrix. As formulated, the updated ensemble $\mathbf{Z}^a_k$ is a linear combination of the columns of $\mathbf{Z}^f_k$ and is obtained by inverting the $p \times p$ matrix $\mathbf{D}_k$ and computing a matrix square root $\mathbf{X}_k$ of the $m \times m$ matrix $(\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k)$.
3. Ensemble SRFs
a. Analysis ensemble
In many typical earth science data assimilation applications the state dimension $n$ and the number of observations $p$ are large, and the method for computing the matrix square root of $(\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k)$ and the updated analysis perturbation ensemble $\mathbf{Z}^a_k$ must be chosen accordingly. A direct approach is to solve first the linear system $\mathbf{D}_k \mathbf{Y}_k = \mathbf{H}_k \mathbf{Z}^f_k$ for the $p \times m$ matrix $\mathbf{Y}_k$, that is, to solve

$$\left(\mathbf{V}^T_k \mathbf{V}_k + \mathbf{R}_k\right) \mathbf{Y}_k = \mathbf{H}_k \mathbf{Z}^f_k, \tag{8}$$

as is done in the first step of the Physical-space Statistical Analysis System (PSAS) algorithm (Cohn et al. 1998). Then, the $m \times m$ matrix $\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k = \mathbf{I} - (\mathbf{H}_k \mathbf{Z}^f_k)^T \mathbf{Y}_k$ is formed, and its matrix square root $\mathbf{X}_k$ computed and applied to $\mathbf{Z}^f_k$ as in (7). Solution of (8), even when $p$ is large, is practical when the forecast error covariance has a low-rank representation and the inverse of the observation error covariance is available (see the appendix). Iterative methods whose cost is on the order of the cost of applying the innovation covariance matrix are appropriate when the forecast error covariance is represented by a correlation model.
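The direct method can be sketched as follows (our illustration; the symmetric square root computed by eigendecomposition corresponds to one particular choice of the orthogonal matrix $\mathbf{U}_k$ in (7)):

```python
import numpy as np

def direct_srf_update(Zf, H, R):
    """Direct SRF analysis update, Eqs. (7)-(8), taking U_k = I.

    Zf : (n, m) forecast perturbation square root
    H  : (p, n) observation operator
    R  : (p, p) observation error covariance
    """
    HZ = H @ Zf                             # p x m
    V = HZ.T                                # m x p
    D = V.T @ V + R                         # p x p innovation covariance
    Y = np.linalg.solve(D, HZ)              # Eq. (8): D Y = H Zf
    T = np.eye(Zf.shape[1]) - V @ Y         # I - V D^{-1} V^T, m x m
    # Symmetric square root of T via eigendecomposition; T is symmetric
    # positive semidefinite, so clip tiny negative eigenvalues from roundoff.
    w, Q = np.linalg.eigh(T)
    X = Q @ np.diag(np.sqrt(np.maximum(w, 0.0))) @ Q.T
    return Zf @ X                           # Eq. (7)
```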
When observation errors are uncorrelated, observations can be assimilated one at a time, or serially (Houtekamer and Mitchell 2001; Bishop et al. 2001). For a single observation, $p = 1$, $\mathbf{V}_k$ is a column vector, and the innovation covariance $\mathbf{D}_k$ is a scalar. In this case, a matrix square root of $(\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k)$ can be computed in closed form by taking the ansatz

$$\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k = \left(\mathbf{I} - \beta_k \mathbf{V}_k \mathbf{V}^T_k\right)\left(\mathbf{I} - \beta_k \mathbf{V}_k \mathbf{V}^T_k\right)^T \tag{9}$$

and solving for the scalar $\beta_k$, which gives $\beta_k = \left[\mathbf{D}_k \pm (\mathbf{R}_k \mathbf{D}_k)^{1/2}\right]^{-1}$. The analysis ensemble update for $p = 1$ is

$$\mathbf{Z}^a_k = \mathbf{Z}^f_k \left(\mathbf{I} - \beta_k \mathbf{V}_k \mathbf{V}^T_k\right); \tag{10}$$

see Andrews (1968) for a general solution involving matrix square roots of $p \times p$ matrices. At observation locations, the analysis error ensemble is related to the forecast error ensemble by $\mathbf{H}_k \mathbf{Z}^a_k = (1 - \beta_k \mathbf{V}^T_k \mathbf{V}_k)\mathbf{H}_k \mathbf{Z}^f_k$. The scalar factor $(1 - \beta_k \mathbf{V}^T_k \mathbf{V}_k)$ has an absolute value less than or equal to one and is positive when the plus sign is chosen in the definition of $\beta_k$.
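A sketch of the serial update, assuming uncorrelated observation errors and taking the plus sign in $\beta_k$ (function and variable names are ours):

```python
import numpy as np

def serial_srf_update(Zf, H, r):
    """Serial (one observation at a time) SRF update, Eqs. (9)-(10).

    Zf : (n, m) forecast perturbation square root
    H  : (p, n) observation operator
    r  : (p,) observation error variances (errors assumed uncorrelated)
    """
    Z = Zf.copy()
    m = Z.shape[1]
    for j in range(H.shape[0]):
        V = (H[j:j+1] @ Z).T                    # m x 1 for this observation
        D = np.vdot(V, V) + r[j]                # scalar innovation covariance
        beta = 1.0 / (D + np.sqrt(r[j] * D))    # plus sign keeps factor positive
        Z = Z @ (np.eye(m) - beta * (V @ V.T))  # Eq. (10)
    return Z
```

Note that the updated square root `Z` is reused for the next observation, which is what makes the one-at-a-time processing equivalent to assimilating all observations at once when their errors are uncorrelated.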
In Whitaker and Hamill (2002) the analysis perturbation ensemble is found from

$$\mathbf{Z}^a_k = \left(\mathbf{I} - \tilde{\mathbf{K}}_k \mathbf{H}_k\right) \mathbf{Z}^f_k, \tag{11}$$

where the matrix $\tilde{\mathbf{K}}_k$ is a solution of the nonlinear equation

$$\left(\mathbf{I} - \tilde{\mathbf{K}}_k \mathbf{H}_k\right)\mathbf{Z}^f_k \left[\left(\mathbf{I} - \tilde{\mathbf{K}}_k \mathbf{H}_k\right)\mathbf{Z}^f_k\right]^T = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)\mathbf{P}^f_k. \tag{12}$$

In the case of a single observation, a solution of (12) is

$$\tilde{\mathbf{K}}_k = \beta_k \mathbf{Z}^f_k \mathbf{V}_k, \tag{13}$$

where the plus sign is chosen in the definition of $\beta_k$. The corresponding analysis perturbation ensemble update,

$$\mathbf{Z}^a_k = \left(\mathbf{I} - \tilde{\mathbf{K}}_k \mathbf{H}_k\right)\mathbf{Z}^f_k = \mathbf{Z}^f_k\left(\mathbf{I} - \beta_k \mathbf{V}_k \mathbf{V}^T_k\right), \tag{14}$$

is identical to (10). Observations with correlated errors, for example, radiosonde height observations from the same sounding, can be handled by applying the whitening transformation $\mathbf{R}^{-1/2}_k$ to the observations to form a new observation set with uncorrelated errors.
Another method of computing the updated analysis ensemble is to use the Sherman–Morrison–Woodbury identity (Golub and Van Loan 1996) to show that

$$\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k = \left(\mathbf{I} + \mathbf{Z}^{fT}_k \mathbf{H}^T_k \mathbf{R}^{-1}_k \mathbf{H}_k \mathbf{Z}^f_k\right)^{-1}. \tag{15}$$

The $m \times m$ matrix on the right-hand side of (15) is practical to compute when the inverse observation error covariance matrix $\mathbf{R}^{-1}_k$ is available. This approach avoids inverting the $p \times p$ matrix $\mathbf{D}_k$ and is used in the ensemble transform Kalman filter (ETKF), where the analysis update is (Bishop et al. 2001)

$$\mathbf{Z}^a_k = \mathbf{Z}^f_k \mathbf{C}_k \left(\boldsymbol{\Gamma}_k + \mathbf{I}\right)^{-1/2}, \tag{16}$$

where $\mathbf{C}_k \boldsymbol{\Gamma}_k \mathbf{C}^T_k$ is the eigenvalue decomposition of $\mathbf{Z}^{fT}_k \mathbf{H}^T_k \mathbf{R}^{-1}_k \mathbf{H}_k \mathbf{Z}^f_k$. Note that the matrix $\mathbf{C}_k$ of orthonormal eigenvectors is not uniquely determined. Comparison with (15) shows that $\mathbf{C}_k(\boldsymbol{\Gamma}_k + \mathbf{I})^{-1}\mathbf{C}^T_k$ is the eigenvalue decomposition of $\mathbf{I} - \mathbf{V}_k\mathbf{D}^{-1}_k\mathbf{V}^T_k$ and thus that $\mathbf{C}_k(\boldsymbol{\Gamma}_k + \mathbf{I})^{-1/2}$ is a square root of $(\mathbf{I} - \mathbf{V}_k\mathbf{D}^{-1}_k\mathbf{V}^T_k)$.
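A sketch of the ETKF update (16) (ours; note that `np.linalg.eigh` returns one particular choice of the non-unique eigenvector matrix $\mathbf{C}_k$):

```python
import numpy as np

def etkf_update(Zf, H, Rinv):
    """ETKF analysis update, Eq. (16).

    Zf   : (n, m) forecast perturbation square root
    H    : (p, n) observation operator
    Rinv : (p, p) inverse observation error covariance
    """
    A = Zf.T @ H.T @ Rinv @ (H @ Zf)           # m x m, symmetric PSD
    gam, C = np.linalg.eigh(A)                 # C Gamma C^T as in Eq. (16)
    return Zf @ C @ np.diag(1.0 / np.sqrt(gam + 1.0))
```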
In the ensemble adjustment Kalman filter (EAKF) the form of the analysis ensemble update is (Anderson 2001)

$$\mathbf{Z}^a_k = \mathbf{A}_k \mathbf{Z}^f_k; \tag{17}$$

the ensemble adjustment matrix $\mathbf{A}_k$ is defined by

$$\mathbf{A}_k \equiv \mathbf{F}_k \mathbf{G}_k \tilde{\mathbf{C}}_k \left(\mathbf{I} + \tilde{\boldsymbol{\Gamma}}_k\right)^{-1/2} \mathbf{G}^{-1}_k \mathbf{F}^T_k, \tag{18}$$

where $\mathbf{P}^f_k = \mathbf{F}_k \mathbf{G}^2_k \mathbf{F}^T_k$ is the eigenvalue decomposition of $\mathbf{P}^f_k$ and the orthogonal matrix $\tilde{\mathbf{C}}_k$ is chosen so that $\tilde{\mathbf{C}}^T_k \mathbf{G}_k \mathbf{F}^T_k \mathbf{H}^T_k \mathbf{R}^{-1}_k \mathbf{H}_k \mathbf{F}_k \mathbf{G}_k \tilde{\mathbf{C}}_k = \tilde{\boldsymbol{\Gamma}}_k$ is diagonal. Choosing the orthogonal matrix $\tilde{\mathbf{C}}_k$ to be $\tilde{\mathbf{C}}_k = \mathbf{G}^{-1}_k \mathbf{F}^T_k \mathbf{Z}^f_k \mathbf{C}_k$ gives that $\tilde{\boldsymbol{\Gamma}}_k = \boldsymbol{\Gamma}_k$ and that the ensemble adjustment matrix is

$$\mathbf{A}_k = \mathbf{Z}^f_k \mathbf{C}_k \left(\mathbf{I} + \boldsymbol{\Gamma}_k\right)^{-1/2} \mathbf{G}^{-1}_k \mathbf{F}^T_k. \tag{19}$$

The EAKF analysis update (17) becomes

$$\mathbf{Z}^a_k = \mathbf{Z}^f_k \mathbf{C}_k \left(\mathbf{I} + \boldsymbol{\Gamma}_k\right)^{-1/2} \mathbf{G}^{-1}_k \mathbf{F}^T_k \mathbf{Z}^f_k. \tag{20}$$

The EAKF analysis ensemble given by (20) is the same as applying the transformation $\mathbf{G}^{-1}_k \mathbf{F}^T_k \mathbf{Z}^f_k$ to the ETKF analysis ensemble. The matrix $\mathbf{G}^{-1}_k \mathbf{F}^T_k \mathbf{Z}^f_k$ is orthogonal and is, in fact, the matrix of right singular vectors of $\mathbf{Z}^f_k$. Therefore, $\mathbf{C}_k(\mathbf{I} + \boldsymbol{\Gamma}_k)^{-1/2} \mathbf{G}^{-1}_k \mathbf{F}^T_k \mathbf{Z}^f_k$ is a matrix square root of $(\mathbf{I} - \mathbf{V}_k \mathbf{D}^{-1}_k \mathbf{V}^T_k)$.
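Using the identity above, the EAKF update (20) can be sketched by appending the orthogonal right-singular-vector factor of $\mathbf{Z}^f_k$ to the ETKF transform (our illustration, assuming $\mathbf{Z}^f_k$ has full column rank):

```python
import numpy as np

def eakf_update(Zf, H, Rinv):
    """EAKF analysis update, Eq. (20)."""
    A = Zf.T @ H.T @ Rinv @ (H @ Zf)           # same m x m matrix as the ETKF
    gam, C = np.linalg.eigh(A)
    # SVD Zf = F G Wt: the factor G^{-1} F^T Zf in Eq. (20) equals Wt.
    _, _, Wt = np.linalg.svd(Zf, full_matrices=False)
    return Zf @ C @ np.diag(1.0 / np.sqrt(gam + 1.0)) @ Wt
```

Multiplying the ETKF ensemble on the right by the orthogonal matrix `Wt` leaves the analysis covariance unchanged, consistent with the non-uniqueness of square roots discussed in section 2.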
Beginning with the same forecast error covariance, the direct, serial, ETKF, and EAKF methods produce different analysis ensembles that span the same state-space subspace and have the same covariance. Higher-order statistical moments of the different analysis ensembles will differ, a relevant issue for nonlinear dynamics. The computational costs of the direct, ETKF, and EAKF methods are seen in Table 1 to scale comparably (see the appendix for details). There are differences in precise computational cost; for instance, the EAKF contains an additional singular value decomposition (SVD) calculation of the forecast ensemble with cost $O(m^3 + m^2 n)$. The computational cost of the serial filter is less dependent on the rank of the forecast error covariance and more sensitive to the number of observations. This difference is important when techniques to account for model error and control filter divergence, as described in the next section, result in an effective forecast error covariance dimension $m$ much larger than the dynamical forecast ensemble dimension.
b. Forecast error statistics
In the previous section we examined methods of forming the analysis ensemble given a matrix square root of the forecast error covariance. There are two fundamental problems associated with directly using the ensemble generated by (3). First, ensemble size is limited by the computational cost of applying the forecast model to each ensemble member. Small ensembles have few degrees of freedom available to represent errors and suffer from sampling error that further degrades the forecast error covariance representation. Sampling error leads to loss of accuracy and underestimation of error covariances that can cause filter divergence. Techniques to deal with this problem include distance-dependent covariance filtering and covariance inflation (Whitaker and Hamill 2002). Covariance localization in the serial method consists of applying a Schur product in the definition of $\tilde{\mathbf{K}}_k$ (Whitaker and Hamill 2002). Similarly, in the EAKF the observations affecting an analysis grid point can be restricted to those nearby (Anderson 2001).
The second and less easily resolved problem with using the ensemble generated by (3) is the neglect of model error and the resulting underestimation of the forecast error covariance. Since there is little theoretical knowledge of model error statistics in complex systems, model error parameterizations combined with adaptive methods are likely necessary (Dee 1995). When the model error covariance $\mathbf{Q}_k$ is taken to have large-scale structure, a reasonable representation is an ensemble or square root decomposition, $\mathbf{Q}_k = \mathbf{Z}^d_k \mathbf{Z}^{dT}_k$, where $\mathbf{Z}^d_k$ is an $n \times q$ matrix. Then, a square root of $\mathbf{P}^f_k$ is the $n \times m$ matrix

$$\mathbf{Z}^f_k = \left[\mathbf{M}_k \mathbf{Z}^a_{k-1} \;\; \mathbf{Z}^d_k\right], \tag{21}$$

where $m = m_e + q$ and $m_e$ is the number of dynamically evolved forecast perturbations. With this model error representation, ensemble size grows by $q$ with each forecast–analysis cycle. Ensemble size can be limited by computing the singular value decomposition of the ensemble and discarding components with small variance (Heemink et al. 2001). A larger ensemble with evolved analysis error and model error could be used in the analysis step, and a smaller ensemble used in the dynamical forecast stage. When the model error covariance $\mathbf{Q}_k$ is approximated as an operator, for instance using a correlation model, Lanczos methods can be used to compute the leading eigenmodes of $\mathbf{M}_k \mathbf{Z}^a_{k-1} (\mathbf{M}_k \mathbf{Z}^a_{k-1})^T + \mathbf{Q}_k$ and form $\mathbf{Z}^f_k$ (Cohn and Todling 1996). Such a forecast error covariance model would resemble those used in "hybrid" methods (Hamill and Snyder 2000). In this case, the rank of $\mathbf{Z}^f_k$ can be substantially larger than the forecast ensemble size, making the serial method attractive. Monte Carlo solutions are another option, as in Mitchell and Houtekamer (2000), where model error parameters were estimated from innovations and used to generate realizations of model error. Perturbing model physics, as done in system simulation, explicitly accounts for some aspects of model uncertainty (Houtekamer et al. 1996).
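As an illustration of (21) together with the SVD truncation mentioned above, a sketch (our names; the model error square root `Zd` is assumed given):

```python
import numpy as np

def augment_and_truncate(MZa, Zd, m_max):
    """Form the augmented square root of Eq. (21), then limit its size by SVD.

    MZa : (n, m_e) dynamically evolved perturbations M_k Z^a_{k-1}
    Zd  : (n, q)   model error square root, Q_k = Zd @ Zd.T
    """
    Zf = np.hstack([MZa, Zd])                  # Eq. (21): n x (m_e + q)
    # Discard components with small variance to keep the ensemble size
    # bounded (Heemink et al. 2001).
    U, s, _ = np.linalg.svd(Zf, full_matrices=False)
    r = min(m_max, s.size)
    return U[:, :r] * s[:r]                    # rank-r square root of Zf @ Zf.T
```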
4. Summary and discussion
Ensemble forecast/assimilation methods use low-rank ensemble representations of forecast and analysis error covariance matrices. These ensembles are scaled matrix square roots of the error covariance matrices, and so ensemble data assimilation methods can be viewed as square root filters (SRFs; Bierman 1977). After assimilation of observations, the analysis ensemble can be constructed stochastically or deterministically. Deterministic construction of the analysis ensemble eliminates one source of sampling error and leads to deterministic SRFs being more accurate than stochastic SRFs in some examples (Whitaker and Hamill 2002; Anderson 2001). SRFs are not unique since different ensembles can have the same covariance. This lack of uniqueness is illustrated in three recently proposed ensemble data assimilation methods that use the Kalman SRF method to update the analysis ensemble (Bishop et al. 2001; Anderson 2001; Whitaker and Hamill 2002). Identifying the methods as SRFs allows a clearer discussion and comparison of their different analysis ensemble updates.
Accounting for small ensemble size and model deficiencies remains a significant issue in ensemble data assimilation systems. Schur products can be used to filter ensemble covariances and effectively increase covariance rank (Houtekamer and Mitchell 1998, 2001; Hamill et al. 2001; Whitaker and Hamill 2002). Covariance inflation is one simple way of accounting for model error and stabilizing the filter (Hamill et al. 2001; Anderson 2001; Whitaker and Hamill 2002). Hybrid methods represent forecast error covariances with a combination of ensemble and parameterized correlation models (Hamill and Snyder 2000). Here we have shown deterministic methods of including model error into a square root or ensemble data assimilation system when the model error has large-scale representation and when the model error is represented by a correlation model. However, the primary difficulty remains obtaining estimates of model error.
The nonuniqueness of SRFs has been exploited in estimation theory to design filters with desirable computational and numerical properties. An open question is whether there are ensemble properties that would make a particular SRF implementation better than another, or if the only issue is computational cost. For instance, it may be possible to choose an analysis update scheme that preserves higher-order, non-Gaussian statistics of the forecast ensemble. This question can only be answered by detailed comparisons of different methods in a realistic setting where other details of the assimilation system such as modeling of systematic errors or data quality control may prove to be as important.
Acknowledgments
Comments and suggestions of two anonymous reviewers improved the presentation of this work. The authors thank Ricardo Todling (NASA DAO) for his comments, suggestions, and corrections. IRI is supported by its sponsors and NOAA Office of Global Programs Grant NA07GP0213. Craig Bishop received support under ONR Project Element 0601153N, Project Number BE-033-0345, and also ONR Grant N00014-00-1-0106.
REFERENCES
- Anderson, J. L., 2001: An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev., 129, 2884–2903.
- Andrews, A., 1968: A square root formulation of the Kalman covariance equations. AIAA J., 6, 1165–1168.
- Bierman, G. J., 1977: Factorization Methods for Discrete Sequential Estimation. Vol. 128, Mathematics in Science and Engineering, Academic Press, 241 pp.
- Bishop, C. H., B. Etherton, and S. J. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Mon. Wea. Rev., 129, 420–436.
- Burgers, G., P. J. van Leeuwen, and G. Evensen, 1998: On the analysis scheme in the ensemble Kalman filter. Mon. Wea. Rev., 126, 1719–1724.
- Cohn, S. E., 1997: An introduction to estimation theory. J. Meteor. Soc. Japan, 75, 257–288.
- Cohn, S. E., and R. Todling, 1996: Approximate data assimilation schemes for stable and unstable dynamics. J. Meteor. Soc. Japan, 74, 63–75.
- Cohn, S. E., A. M. da Silva, J. Guo, M. Sienkiewicz, and D. Lamich, 1998: Assessing the effect of data selection with the DAO Physical-space Statistical Analysis System. Mon. Wea. Rev., 126, 2913–2926.
- Dee, D. P., 1995: On-line estimation of error covariance parameters for atmospheric data assimilation. Mon. Wea. Rev., 123, 1128–1145.
- Evensen, G., 1994: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res., 99, 10143–10162.
- Golub, G. H., and C. F. Van Loan, 1996: Matrix Computations. 3d ed. The Johns Hopkins University Press, 694 pp.
- Hamill, T. M., and C. Snyder, 2000: A hybrid ensemble Kalman filter–3D variational analysis scheme. Mon. Wea. Rev., 128, 2905–2919.
- Hamill, T. M., J. S. Whitaker, and C. Snyder, 2001: Distance-dependent filtering of background error covariance estimates in an ensemble Kalman filter. Mon. Wea. Rev., 129, 2776–2790.
- Heemink, A. W., M. Verlaan, and A. J. Segers, 2001: Variance reduced ensemble Kalman filtering. Mon. Wea. Rev., 129, 1718–1728.
- Houtekamer, P. L., and H. L. Mitchell, 1998: Data assimilation using an ensemble Kalman filter technique. Mon. Wea. Rev., 126, 796–811.
- Houtekamer, P. L., and H. L. Mitchell, 2001: A sequential ensemble Kalman filter for atmospheric data assimilation. Mon. Wea. Rev., 129, 123–137.
- Houtekamer, P. L., L. Lefaivre, J. Derome, H. Ritchie, and H. L. Mitchell, 1996: A system simulation approach to ensemble prediction. Mon. Wea. Rev., 124, 1225–1242.
- Maybeck, P. S., 1982: Stochastic Models, Estimation, and Control. Vol. 1, Academic Press, 423 pp.
- Mitchell, H. L., and P. L. Houtekamer, 2000: An adaptive ensemble Kalman filter. Mon. Wea. Rev., 128, 416–433.
- Pham, D., 2001: Stochastic methods for sequential data assimilation in strongly nonlinear systems. Mon. Wea. Rev., 129, 1194–1207.
- Pham, D., J. Verron, and M. Roubaud, 1998: A singular evolutive extended Kalman filter for data assimilation in oceanography. J. Mar. Syst., 16, 323–340.
- Verlaan, M., and A. W. Heemink, 1997: Tidal flow forecasting using reduced rank square root filters. Stochastic Hydrol. Hydraul., 11, 349–368.
- Whitaker, J. S., and T. M. Hamill, 2002: Ensemble data assimilation without perturbed observations. Mon. Wea. Rev., 130, 1913–1924.
APPENDIX
Computational Costs
Here we detail the computational cost scalings summarized in Table 1. All the methods require applying the observation operator to the ensemble members to form $\mathbf{H}_k \mathbf{Z}^f_k$, and we do not include its cost. This cost is important when comparing ensemble and nonensemble methods, particularly for complex observation operators. The cost of computing $\mathbf{H}_k \mathbf{Z}^f_k$ is formally $O(mnp)$, but may be significantly less when $\mathbf{H}_k$ is sparse or can be applied efficiently. We also assume that the inverse observation error covariance $\mathbf{R}^{-1}_k$ is inexpensive to apply.
Direct method
- Solve $(\mathbf{H}_k \mathbf{P}^f_k \mathbf{H}^T_k + \mathbf{R}_k)\mathbf{Y}_k = \mathbf{H}_k \mathbf{Z}^f_k$ for $\mathbf{Y}_k$. If $\mathbf{R}^{-1}_k$ is available, the solution can be obtained using the Sherman–Morrison–Woodbury identity (Golub and Van Loan 1996),

$$\left(\mathbf{H}_k \mathbf{P}^f_k \mathbf{H}^T_k + \mathbf{R}_k\right)^{-1} = \mathbf{R}^{-1}_k - \mathbf{R}^{-1}_k \mathbf{H}_k \mathbf{Z}^f_k \left(\mathbf{I} + \mathbf{Z}^{fT}_k \mathbf{H}^T_k \mathbf{R}^{-1}_k \mathbf{H}_k \mathbf{Z}^f_k\right)^{-1} \mathbf{Z}^{fT}_k \mathbf{H}^T_k \mathbf{R}^{-1}_k,$$

and only inverting $m \times m$ matrices. Cost: $O(m^3 + m^2 p)$.
- Form $\mathbf{I} - (\mathbf{H}_k \mathbf{Z}^f_k)^T \mathbf{Y}_k$. Cost: $O(p m^2)$.
- Compute the matrix square root of the $m \times m$ matrix $\mathbf{I} - (\mathbf{H}_k \mathbf{Z}^f_k)^T \mathbf{Y}_k$. Cost: $O(m^3)$.
- Apply the matrix square root to $\mathbf{Z}^f_k$. Cost: $O(m^2 n)$.
Total cost: $O(m^3 + m^2 p + m^2 n)$.
Serial method
For each observation:
- Form the scalars $\mathbf{D}_k$ and $\beta_k$. Cost: $O(m)$.
- Form $\mathbf{I} - \beta_k \mathbf{V}_k \mathbf{V}^T_k$ and apply it to $\mathbf{Z}^f_k$. Cost: $O(nm)$.
Total cost: $O(mp + mnp)$.
ETKF
- Form $\mathbf{Z}^{fT}_k \mathbf{H}^T_k \mathbf{R}^{-1}_k \mathbf{H}_k \mathbf{Z}^f_k$, assuming $\mathbf{R}^{-1}_k$ is inexpensive to apply. Cost: $O(m^2 p)$.
- Compute the eigenvalue decomposition of the $m \times m$ matrix. Cost: $O(m^3)$.
- Apply to $\mathbf{Z}^f_k$. Cost: $O(m^2 n)$.
Total cost: $O(m^2 p + m^3 + m^2 n)$.
EAKF
Cost in addition to ETKF:
- Eigenvalue decomposition of $\mathbf{P}^f_k$ (low rank), that is, the SVD of $\mathbf{Z}^f_k$. Cost: $O(m^2 n + m^3)$.
- Form $\mathbf{F}^T_k \mathbf{Z}^f_k$. Cost: $O(m^2 n)$.
Total cost: $O(m^2 p + m^3 + m^2 n)$.
Table 1. Summary of analysis ensemble calculation computational cost as a function of forecast ensemble size $m$, number of observations $p$, and state dimension $n$:

| Method | Computational cost |
| --- | --- |
| Direct | $O(m^3 + m^2 p + m^2 n)$ |
| Serial | $O(mp + mnp)$ |
| ETKF | $O(m^2 p + m^3 + m^2 n)$ |
| EAKF | $O(m^2 p + m^3 + m^2 n)$ |