Stata | FAQ: Explanation of the delta method

What is the delta method and how is it used to estimate the standard error of a transformed parameter?

Title: Explanation of the delta method
Author: Alan H. Feiveson, NASA

The delta method, in its essence, expands a function of a random variable about its mean, usually with a first-order Taylor approximation, and then takes the variance. For example, if we want to approximate the variance of G(X), where X is a random variable with mean mu and G() is differentiable, we can write

    G(X) = G(mu) + (X-mu)G'(mu)        (approximately)

so that

    Var(G(X)) = Var(X)*[G'(mu)]^2     (approximately)

where G'() = dG/dX. The approximation is good only if X has a high probability of being close enough to its mean, mu, that the first-order Taylor expansion is still accurate.
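
For a concrete check in Stata, one can compare this formula against nlcom, which computes delta-method standard errors for transformations of estimated coefficients. Below is a minimal sketch, using the shipped auto dataset and the transformation exp(_b[weight]), both chosen purely for illustration:

    sysuse auto, clear
    regress mpg weight

    * delta-method standard error of exp(_b[weight]), as reported by nlcom
    nlcom exp(_b[weight])

    * the same standard error by hand: with G(x) = exp(x), G'(x) = exp(x),
    * so se(G(b)) is approximately exp(b)*se(b)
    display exp(_b[weight]) * _se[weight]

The two standard errors agree because nlcom applies exactly this first-order expansion, evaluated at the estimated coefficient in place of the unknown mu.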

This idea can easily be expanded to vector-valued functions of random vectors,

    Var(G(X)) = G'(mu) Var(X) [G'(mu)]^T

and that, in fact, is the basis for deriving the asymptotic variance of maximum-likelihood estimators. In the above, X is a 1 x m row vector; Var(X) is its m x m variance–covariance matrix; G() is a vector function returning a 1 x n row vector; and G'() is its n x m matrix of first derivatives. T is the transpose operator. Var(G(X)) is the resulting n x n variance–covariance matrix of G(X).
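
The same hand computation works in the vector case. A short sketch, again on the auto dataset, using the ratio _b[weight]/_b[length] as an arbitrary scalar-valued G() of the two slope coefficients:

    sysuse auto, clear
    regress mpg weight length

    * delta-method standard error of the ratio, as reported by nlcom
    nlcom _b[weight] / _b[length]

    * the same computation written out as G'(mu) Var(X) [G'(mu)]^T
    matrix V = e(V)                       // 3 x 3: weight, length, _cons
    scalar bw = _b[weight]
    scalar bl = _b[length]
    * gradient of G(bw, bl) = bw/bl, with a 0 in the _cons position
    matrix g = (1/bl, -bw/(bl^2), 0)
    matrix VG = g * V * g'
    display sqrt(VG[1,1])

Here the 1 x 3 row vector g plays the role of G'(mu), and e(V) plays the role of Var(X); the only care needed is that the entries of g follow the row order of e(V).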

Acknowledgments

Nicholas Cox of Durham University and John Gleason of Syracuse University provided the references.

References

Oehlert, G. W. 1992. A note on the delta method. American Statistician 46: 27–29.

Rice, J. A. 1994. Mathematical Statistics and Data Analysis. 2nd ed. Duxbury.