Bootstrap Confidence Intervals in Linear Models

Bootstrap confidence intervals: A comparative simulation study

arXiv (Cornell University), 2024

The bootstrap is a widely used technique for estimating the properties of a given estimator, such as its bias and standard error. In this paper, we evaluate and compare five bootstrap-based methods for constructing confidence intervals: two of them (Normal and Studentized) based on the bootstrap estimate of the standard error; another two (Quantile and Better) based on the estimated distribution of the parameter estimator; and finally an interval constructed from the Bayesian bootstrap, relying on the notion of a credible interval. The methods are compared through Monte Carlo simulations in different scenarios, including samples with autocorrelation induced by a copula model. The results are compared with respect to the coverage rate, the median interval length, and a novel indicator, proposed in this paper, that combines both. The results show that the Studentized method has the best coverage rate, although the smallest intervals are attained by the Bayesian method. In general, all methods are appropriate and demonstrate good performance even in the scenarios violating the independence assumption.
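The two families of intervals the paper compares can be sketched in a few lines. This is a minimal illustration with made-up sample values and B = 2000 resamples (not the study's data or settings): the Normal interval uses the bootstrap standard error, while the Quantile (percentile) interval reads the endpoints directly off the bootstrap distribution.

```python
import random
import statistics

random.seed(0)

# Hypothetical observed sample (illustrative values only).
sample = [2.1, 3.4, 1.8, 5.0, 2.9, 3.7, 4.2, 2.5, 3.1, 4.8]
n = len(sample)
B = 2000  # number of bootstrap resamples

# Bootstrap replicates of the sample mean.
boot_means = []
for _ in range(B):
    resample = [random.choice(sample) for _ in range(n)]
    boot_means.append(statistics.fmean(resample))

theta_hat = statistics.fmean(sample)
se_boot = statistics.stdev(boot_means)  # bootstrap estimate of the SE

# Normal interval: point estimate +/- z * bootstrap standard error.
z = 1.96  # standard normal 97.5% quantile, for a 95% interval
normal_ci = (theta_hat - z * se_boot, theta_hat + z * se_boot)

# Quantile interval: empirical 2.5% and 97.5% quantiles of the
# bootstrap distribution itself.
boot_means.sort()
quantile_ci = (boot_means[int(0.025 * B)], boot_means[int(0.975 * B)])

print(normal_ci, quantile_ci)
```

The two intervals usually agree closely when the bootstrap distribution is roughly symmetric, and diverge when it is skewed, which is one of the situations the paper's simulations probe.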

Bootstrap confidence intervals

1996

This article surveys bootstrap methods for producing good approximate confidence intervals. The goal is to improve by an order of magnitude upon the accuracy of the standard intervals $\hat{\theta} \pm z^{(\alpha)} \hat{\sigma}$, in a way that allows routine application even to very complicated problems. Both theory and examples are used to show how this is done. The first seven sections provide a heuristic overview of four bootstrap confidence interval procedures: $BC_a$, bootstrap-t, ABC and calibration. Sections 8 and 9 describe the theory behind these methods, and their close connection with the likelihood-based confidence interval theory developed by Barndorff-Nielsen, Cox and Reid and others.
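Of the four procedures surveyed, bootstrap-t is the most direct to sketch: studentize each bootstrap replicate, take quantiles of the resulting t-statistics, and note that the quantiles are reversed when the interval is assembled. The data values and B below are illustrative only.

```python
import random
import statistics
import math

random.seed(1)

data = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8, 3.9, 5.2]  # illustrative
n = len(data)
theta_hat = statistics.fmean(data)
se_hat = statistics.stdev(data) / math.sqrt(n)

B = 2000
t_stats = []
for _ in range(B):
    rs = [random.choice(data) for _ in range(n)]
    m = statistics.fmean(rs)
    se = statistics.stdev(rs) / math.sqrt(n)
    # Studentized deviation of the bootstrap replicate from theta_hat.
    t_stats.append((m - theta_hat) / se)

t_stats.sort()
t_lo = t_stats[int(0.025 * B)]
t_hi = t_stats[int(0.975 * B)]

# Note the reversal: the upper t-quantile sets the lower endpoint.
ci = (theta_hat - t_hi * se_hat, theta_hat - t_lo * se_hat)
print(ci)
```

Because the bootstrap t-distribution need not be symmetric, the resulting interval can be asymmetric about $\hat{\theta}$, which is the source of the second-order accuracy gain over the standard interval.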

Unsafe at any speed? A critical examination of the underlying assumptions and logical structure of bootstrapping, and of its approach to the problem of confidence intervals

The apparent freedom of bootstrapping from restrictive assumptions about the nature of the underlying distribution has made it increasingly popular in practical statistical analysis. But this has produced the dangerous illusion that it is valid in all cases. Examples are produced to show that this is not the case. When it is valid, it is shown that only one out of the many possible methods of producing confidence intervals is correct; this is based on the Neyman-Pearson technique, which the present generation of statisticians appears to have overlooked. It is speculated that the empirical distribution function on which bootstrapping is based may be inferior, as an approximation to the underlying distribution, to one based on cumulants, which may also hold out promise as a tool for Bayesian techniques.

Bootstrapped Confidence Intervals as an Approach to Statistical Inference

Organizational Research Methods, 2005

Confidence intervals are in many ways a more satisfactory basis for statistical inference than hypothesis tests. This article explains a simple method for using bootstrap resampling to derive confidence intervals. This method can be used for a wide variety of statistics, including the mean and median, the difference of two means or proportions, and correlation and regression coefficients. It can be implemented by an Excel spreadsheet, which is available to readers on the Web. The rationale behind the method is transparent, and it relies on almost no sophisticated statistical concepts.
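The same simple recipe applies unchanged to statistics with no textbook interval formula. As one of the cases the article lists, here is a percentile bootstrap for a correlation coefficient, resampling (x, y) pairs together; the paired data are hypothetical.

```python
import random
import statistics
import math

random.seed(5)

# Hypothetical paired observations (illustrative values only).
u = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
v = [2.1, 1.9, 3.5, 4.4, 4.9, 6.3, 6.8, 8.2]

def corr(a, b):
    """Pearson correlation computed from first principles."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den

pairs = list(zip(u, v))
B = 2000
boot_r = []
for _ in range(B):
    rs = [random.choice(pairs) for _ in pairs]  # resample whole pairs
    a, b = zip(*rs)
    boot_r.append(corr(a, b))

boot_r.sort()
ci = (boot_r[int(0.025 * B)], boot_r[int(0.975 * B)])
print(ci)
```

The resampling loop is exactly what the article's spreadsheet automates; only the statistic computed per resample changes from one application to the next.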

Bootstrapping confidence levels for hypotheses about regression models

This paper shows how bootstrapping (using a spreadsheet) can be used to derive confidence levels for hypotheses about features of regression models - such as their shape, and the location of optimum values. The data used as an example leads to a confidence level of 67% that the sample comes from a population which displays the hypothesized inverted U shape. There is no obvious and satisfactory alternative way of deriving this result, or an equivalent result. In particular, null hypothesis tests cannot provide adequate support for this type of hypothesis. Keywords: Confidence, Regression models, Curvilinear models, Bootstrapping
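The paper's "confidence level for a shape hypothesis" idea amounts to counting how often the hypothesized feature appears across bootstrap refits. The sketch below, with invented data rather than the paper's dataset, fits a quadratic to each resample and reports the fraction with negative curvature (an inverted U).

```python
import random

random.seed(2)

# Illustrative data with an inverted-U tendency (not the paper's dataset).
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [1.2, 2.8, 3.9, 4.6, 5.1, 5.0, 4.7, 4.0, 3.1, 1.9]

def quad_coeff(xs, ys):
    """Least-squares coefficient of x^2 in y = a + b*x + c*x^2 (Cramer's rule)."""
    n = len(xs)
    s1 = sum(xs); s2 = sum(v ** 2 for v in xs)
    s3 = sum(v ** 3 for v in xs); s4 = sum(v ** 4 for v in xs)
    sy = sum(ys)
    sxy = sum(a * b for a, b in zip(xs, ys))
    sx2y = sum(a * a * b for a, b in zip(xs, ys))
    M = [[n, s1, s2], [s1, s2, s3], [s2, s3, s4]]  # normal equations
    v = [sy, sxy, sx2y]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    Mc = [[M[r][0], M[r][1], v[r]] for r in range(3)]
    return det3(Mc) / det3(M)

pairs = list(zip(x, y))
B = 1000
hits = sum(
    quad_coeff(*zip(*[random.choice(pairs) for _ in pairs])) < 0
    for _ in range(B)
)
confidence = hits / B  # bootstrap confidence level for the inverted-U shape
print(confidence)
```

This is the kind of result the paper argues a null hypothesis test cannot deliver: a direct probability statement about the shape feature, rather than a p-value against a point null.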

Bootstrap methods in econometrics

The bootstrap is a method for estimating the distribution of an estimator or test statistic by resampling one's data or a model estimated from the data. Under conditions that hold in a wide variety of econometric applications, the bootstrap provides approximations to distributions of statistics, coverage probabilities of confidence intervals, and rejection probabilities of hypothesis tests that are more accurate than the approximations of first-order asymptotic distribution theory. The reductions in the differences between true and nominal coverage or rejection probabilities can be very large. In addition, the bootstrap provides a way to carry out inference in certain settings where obtaining analytic distributional approximations is difficult or impossible. This article explains the usefulness and limitations of the bootstrap in contexts of interest in econometrics. The presentation is informal and expository. It provides an intuitive understanding of how the bootstrap works. Mathematical details are available in references that are cited.

Bootstrap confidence sets under a model misspecification

A multiplier bootstrap procedure for construction of likelihood-based confidence sets is considered for finite samples and a possible model misspecification. Theoretical results justify the bootstrap consistency for a small or moderate sample size and allow one to control the impact of the parameter dimension $p$: the bootstrap approximation works if $p^3/n$ is small. The main result about bootstrap consistency continues to apply even if the underlying parametric model is misspecified under the so-called Small Modeling Bias condition. In the case when the true model deviates significantly from the considered parametric family, the bootstrap procedure is still applicable but it becomes somewhat conservative: the size of the constructed confidence sets is increased by the modeling bias. We illustrate the results with numerical examples for misspecified constant and logistic regressions.

Bootstrapping econometric models

2009

The bootstrap is a statistical technique used more and more widely in econometrics. While it is capable of yielding very reliable inference, some precautions should be taken in order to ensure this. Two "Golden Rules" are formulated that, if observed, help to obtain the best the bootstrap can offer. Bootstrapping always involves setting up a bootstrap data-generating process (DGP). The main types of bootstrap DGP in current use are discussed, with examples of their use in econometrics. The ways in which the bootstrap can be used to construct confidence sets differ somewhat from methods of hypothesis testing. The relation between the two sorts of problem is discussed.
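One of the main bootstrap DGPs discussed in this literature is the residual bootstrap for a regression: keep the regressors fixed, resample the fitted residuals, and regenerate the dependent variable from the estimated model. A minimal sketch with invented data (not an example from the chapter):

```python
import random
import statistics

random.seed(3)

# Hypothetical regression data (illustrative values only).
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = [2.3, 2.9, 4.1, 4.8, 6.2, 6.8, 8.1, 8.7]

def ols(xs, ys):
    """Simple-regression OLS: returns (intercept, slope)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b = (sum((a - mx) * (c - my) for a, c in zip(xs, ys))
         / sum((a - mx) ** 2 for a in xs))
    return my - b * mx, b

a_hat, b_hat = ols(x, y)
residuals = [yi - (a_hat + b_hat * xi) for xi, yi in zip(x, y)]

# Bootstrap DGP: fixed regressors, resampled residuals, regenerated y.
B = 2000
boot_slopes = []
for _ in range(B):
    y_star = [a_hat + b_hat * xi + random.choice(residuals) for xi in x]
    boot_slopes.append(ols(x, y_star)[1])

se_slope = statistics.stdev(boot_slopes)
print(b_hat, se_slope)
```

The choice of DGP is exactly where the chapter's "Golden Rules" bite: the residual bootstrap imposes the estimated model on the resampling scheme, whereas a pairs bootstrap resamples (x, y) cases and imposes less structure.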

Bootstrap Confidence Intervals in Linear Models: Case of Outliers

Iqtisodiy taraqqiyot va tahlil, 2024

Confidence interval estimation in linear models has been of great interest in social science. However, the traditional approach to building confidence intervals rests on a set of assumptions, including that the dataset has no extreme outliers. In this study, we discuss the presence of severe outliers in linear models and suggest the bootstrap as an alternative way to construct confidence intervals. We conclude that bootstrap confidence intervals can outperform traditional confidence intervals in the presence of outliers when the sample size is small or the population distribution is not normal. Lastly, we encourage researchers to run a computer simulation to evaluate the conclusions of this study.
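The alternative the abstract suggests can be sketched as a pairs (case) bootstrap of a regression slope, which needs no normality assumption on the errors. The data below, including the single severe outlier, are invented for illustration and are not from the study.

```python
import random
import statistics

random.seed(4)

# Illustrative data; the last y value is a severe outlier (hypothetical).
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [1.1, 2.0, 2.9, 4.2, 5.1, 5.8, 7.2, 7.9, 9.1, 25.0]

def slope(xs, ys):
    """OLS slope of ys on xs."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

# Pairs bootstrap: resample (x, y) cases together, so the outlier
# appears in some resamples and not others.
pairs = list(zip(x, y))
B = 2000
boot = sorted(
    slope(*zip(*[random.choice(pairs) for _ in pairs]))
    for _ in range(B)
)
percentile_ci = (boot[int(0.025 * B)], boot[int(0.975 * B)])
print(percentile_ci)
```

Because resamples that omit the outlier yield a very different slope than those that include it several times, the bootstrap interval reflects the outlier's influence directly, whereas the classical t-interval's width rests on a normal-errors assumption the outlier violates.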