Weighted Bootstrap with Probability in Regression
Related papers
Estimating regression coefficients using weighted bootstrap with probability
WSEAS Transactions on Mathematics, 2009
In this paper we propose a new Weighted Bootstrap with Probability (WBP) technique. The basic idea of the proposed bootstrap technique is to resample with probabilities, which serve as the control mechanism for obtaining good estimates when the original data set contains multiple outliers. Numerical examples and a simulation study are carried out to evaluate the performance of the WBP estimates against the Bootstrap 1 and Diagnostic-Before-Bootstrap estimates. The results of the study signify that the WBP method is more efficient than the other two methods.
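The abstract does not spell out how the resampling probabilities are built, but the idea can be sketched as follows. This is a minimal illustration, not the paper's WBP algorithm: as an assumption, probabilities are made inversely proportional to each case's standardized robust residual from an initial OLS fit, so that outlying cases are drawn less often.

```python
import numpy as np

def wbp_coefficients(X, y, n_boot=500, rng=None):
    """Weighted Bootstrap with Probability (sketch): resample cases with
    probabilities that downweight likely outliers.  The paper's exact
    weighting scheme is not reproduced; as an assumption we use
    probabilities proportional to 1 / (1 + |standardized residual|)."""
    rng = np.random.default_rng(rng)
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])            # add intercept column
    beta0, *_ = np.linalg.lstsq(Xd, y, rcond=None)   # initial OLS fit
    resid = y - Xd @ beta0
    s = np.median(np.abs(resid)) / 0.6745            # robust scale (normalized MAD)
    if s == 0:
        s = 1.0                                      # guard against zero scale
    p = 1.0 / (1.0 + np.abs(resid / s))              # downweight outlying cases
    p /= p.sum()                                     # normalize to probabilities
    betas = []
    for _ in range(n_boot):
        idx = rng.choice(n, size=n, replace=True, p=p)
        b, *_ = np.linalg.lstsq(Xd[idx], y[idx], rcond=None)
        betas.append(b)
    return np.mean(betas, axis=0)                    # bootstrap coefficient estimate
```

With clean data the weighted resampling reduces to something close to the uniform bootstrap; the probabilities only bite when some residuals are far from the bulk.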
Bootstrap Confidence Intervals in Linear Models: Case of Outliers
Iqtisodiy taraqqiyot va tahlil, 2024
Confidence interval estimation in linear models has attracted considerable interest in the social sciences. However, the traditional approach to building confidence intervals rests on a set of assumptions, including that the data set contains no extreme outliers. In this study, we discuss the presence of severe outliers in linear models and suggest the bootstrap as an alternative way to construct confidence intervals. We conclude that bootstrap confidence intervals can outperform traditional confidence intervals in the presence of outliers when the sample size is small or the population distribution is not normal. Lastly, we encourage researchers to run computer simulations to evaluate the conclusions of this study.
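The generic bootstrap alternative the abstract advocates can be sketched with a percentile interval for a regression slope. This is one standard construction, not necessarily the exact procedure the paper studies: cases are resampled with replacement, the slope is refit each time, and the interval is read off the empirical quantiles.

```python
import numpy as np

def percentile_ci_slope(x, y, n_boot=2000, alpha=0.05, rng=None):
    """Percentile bootstrap confidence interval for the slope of a simple
    linear regression, via case resampling.  A generic sketch of the
    bootstrap approach, making no normality assumption on the errors."""
    rng = np.random.default_rng(rng)
    n = len(y)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.choice(n, size=n, replace=True)    # resample cases
        slopes[b] = np.polyfit(x[idx], y[idx], 1)[0] # slope of refit OLS line
    lo, hi = np.quantile(slopes, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

Because the interval comes from empirical quantiles rather than a t-distribution, it adapts to skewed or heavy-tailed sampling distributions of the slope, which is the setting the abstract targets.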
Acta Scientiarum. Technology, 2013
The bootstrap method is generally performed under the assumption that each sample unit has the same probability of being resampled. However, when the sample contains outliers, the empirical distribution generated by this method may be distorted and may not accurately represent the original sample. The current study proposes a bootstrap algorithm that uses measures of influence to compute the resampling probabilities. The method was evaluated in simulation scenarios based on the logistic growth curve model, using the CovRatio measure to assess the impact of an influential observation on the determinant of the covariance matrix of the parameter estimates. In most cases, bias estimates were reduced. The method is therefore suitable for non-linear models and allows the researcher to apply other influence measures for further bias reduction.
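The core step of the algorithm described above — turning a case influence measure into resampling probabilities — can be sketched in a linear-model setting. As assumptions, Cook's distance stands in for the paper's CovRatio measure (which it applies to a nonlinear growth model), and probabilities simply decrease with influence:

```python
import numpy as np

def influence_resampling_probs(X, y):
    """Sketch: convert a per-case influence measure into bootstrap
    resampling probabilities.  Cook's distance for a linear model is used
    here as an assumption; the paper works with CovRatio in a logistic
    growth curve model."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])            # design with intercept
    H = Xd @ np.linalg.inv(Xd.T @ Xd) @ Xd.T         # hat matrix
    h = np.diag(H)                                   # leverages
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    k = Xd.shape[1]                                  # number of parameters
    s2 = resid @ resid / (n - k)                     # residual variance
    cooks = resid**2 * h / (k * s2 * (1 - h) ** 2)   # Cook's distance per case
    p = 1.0 / (1.0 + cooks)                          # downweight influential cases
    return p / p.sum()                               # valid probability vector
```

These probabilities would then replace the uniform 1/n weights in the resampling loop, so that influential observations enter bootstrap samples less often.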
Robustness of Bootstrap Estimators to Influential Observations
Anale. Seria Informatică, 2019
The existence of outliers in the original sample may create problems for classical bootstrap estimates. Because bootstrap resampling is done with replacement, a bootstrap sample may contain more outliers than the original data set, and these outliers will then have an undue effect on the classical bootstrap mean and standard deviation. This study examined the performance of three bootstrap estimators, namely case bootstrapping, fixed-X bootstrapping and residual resampling, under different levels of outliers. The objective was to determine which of these bootstrap methods is resistant to the presence of outliers in the data. Three outlier levels (5%, 10% and 20%) were injected into samples of sizes N = 20, 30, 50 and 100, each replicated 1000 and 5000 times respectively. The bootstrap methods were evaluated using the mean, standard error, absolute bias, mean square error and root mean square error. The results showed that residual resampling performed better than the other two estimators.
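The residual resampling scheme that performed best in this study keeps the design fixed and resamples only the (centred) residuals around the fitted line. A minimal simple-regression sketch, not the study's exact simulation code:

```python
import numpy as np

def residual_bootstrap_slopes(x, y, n_boot=1000, rng=None):
    """Residual-resampling bootstrap: fit once, keep X fixed, and build
    each bootstrap response by adding resampled centred residuals to the
    fitted values.  Unlike case resampling, no row of the design is ever
    duplicated or dropped."""
    rng = np.random.default_rng(rng)
    n = len(y)
    slope, intercept = np.polyfit(x, y, 1)
    fitted = intercept + slope * x
    resid = y - fitted
    resid = resid - resid.mean()                     # centre the residuals
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        y_star = fitted + rng.choice(resid, size=n, replace=True)
        slopes[b] = np.polyfit(x, y_star, 1)[0]      # refit on synthetic response
    return slopes
```

Because every bootstrap sample uses the same X values, an outlier can be drawn at most as one residual among n, which is consistent with the resistance this study reports for the method.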
Abstract Bootstrap Confidence Intervals in Linear Models
2016
A bootstrap method for generating confidence intervals in linear models is presented, based on a nonobservable bootstrap sample of true residuals that nevertheless leads to an observable final result. This means that the only error in the method is the pure bootstrap error obtained by replacing the true residual distribution by the empirical one. It is shown that the method is valid, having the same asymptotic conditional distribution as the ordinary bootstrap method. Simulations indicate clearly that the abstract bootstrap method works better than the ordinary bootstrap method for small samples.
New bootstrap inference for spurious regression problems
Journal of Applied Statistics, 2015
Intentionally biased bootstrap methods
Journal of the Royal Statistical Society: Series B (Statistical Methodology), 1999
A class of weighted bootstrap techniques, called biased bootstrap or b-bootstrap methods, is introduced. It is motivated by the need to adjust empirical methods, such as the 'uniform' bootstrap, in a surgical way to alter some of their features while leaving others unchanged. Depending on the nature of the adjustment, the b-bootstrap can be used to reduce bias, or to reduce variance or to render some characteristic equal to a predetermined quantity. Examples of the last application include a b-bootstrap approach to hypothesis testing in nonparametric contexts, where the b-bootstrap enables simulation 'under the null hypothesis', even when the hypothesis is false, and a b-bootstrap competitor to Tibshirani's variance stabilization method. An example of the bias reduction application is adjustment of Nadaraya–Watson kernel estimators to make them competitive with local linear smoothing. Other applications include density estimation under constraints, outlier trimming, sensitivity analysis, skewness or kurtosis reduction and shrinkage.
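One concrete way to "render some characteristic equal to a predetermined quantity" is exponential tilting of the empirical distribution: choose weights p_i proportional to exp(t * x_i) with t tuned so the weighted mean hits a target, then resample with those weights. A minimal sketch of that single step (the function name and the bisection bracket are illustrative choices, not from the paper):

```python
import numpy as np

def tilted_weights(x, target, tol=1e-10):
    """Biased-bootstrap sketch: exponentially tilt the empirical
    distribution so its mean equals `target`, e.g. to resample 'under the
    null hypothesis' H0: mean == target.  Weights are p_i proportional to
    exp(t * x_i); t is found by bisection (the tilted mean is increasing
    in t)."""
    x = np.asarray(x, dtype=float)
    assert x.min() < target < x.max(), "target must lie inside the data range"

    def tilted_mean(t):
        w = np.exp(t * (x - x.mean()))               # centre for numerical stability
        w /= w.sum()
        return w @ x

    lo, hi = -50.0, 50.0                             # bracket, fine for moderate data
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tilted_mean(mid) < target:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    w = np.exp(t * (x - x.mean()))
    return w / w.sum()                               # tilted resampling probabilities
```

Feeding these weights to a weighted resampler (e.g. `rng.choice(x, size=n, p=w)`) then simulates under the imposed constraint while perturbing the empirical distribution as little as possible in the tilting family.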
Research Square (Research Square), 2022
This research applies wild bootstrap techniques to a linear regression model with heteroscedasticity and outliers using Monte Carlo methods. Samples of 120 and 200 observations were generated by duplicating random samples of size 40 three and five times, respectively. The stochastic error for the different sample sizes follows a normal distribution, and the linear regression model is based on a fixed X variable. A form of heteroscedasticity was introduced into the regression model to assess how the parameter estimates perform, and outliers were generated by replacing good observations with outlying ones. The optimal approach was determined by applying BootWu, BootLiu, RBootWu and RBootLiu in the presence of outliers and heteroscedastic errors. The robust location and scale, the wild bootstrap sampling procedures of Wu and Liu, and the Least Median of Squares (LMS) estimator were combined with the Alamgir redescending M-estimate weight function. Using bias, RMSE and average standard error, the performance of the proposed methods RWBootWu and RWBootLiu is compared with that of the existing methods RBootWu, RBootLiu, BootWu and BootLiu. The simulation findings show that RWBootWu and RWBootLiu are good alternatives to existing estimators for regression models.
Mathematics Subject Classification: 62F35 (robustness and adaptive procedures), 62F40 (bootstrap, jackknife and other resampling methods), 62H12 (estimation in multivariate analysis), 62J07 (ridge regression; shrinkage estimators), 62K25 (robust parameter designs)
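The wild bootstrap underlying all of the estimators above keeps X fixed and multiplies each residual by an independent random weight with mean 0 and variance 1, which preserves per-observation error variance under heteroscedasticity. A minimal sketch; Rademacher (+/-1) weights are used here as an assumption, and the paper's specific Wu and Liu weight schemes and robust LMS-based variants are not reproduced:

```python
import numpy as np

def wild_bootstrap_slopes(x, y, n_boot=1000, rng=None):
    """Wild bootstrap sketch for regression with heteroscedastic errors:
    fit once, then perturb each residual by an independent weight with
    mean 0 and variance 1 (Rademacher draws here) and refit.  Each case
    keeps its own residual magnitude, so heteroscedasticity is preserved."""
    rng = np.random.default_rng(rng)
    n = len(y)
    slope, intercept = np.polyfit(x, y, 1)
    fitted = intercept + slope * x
    resid = y - fitted
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        v = rng.choice([-1.0, 1.0], size=n)          # Rademacher weights
        y_star = fitted + resid * v                  # sign-flipped residuals
        slopes[b] = np.polyfit(x, y_star, 1)[0]
    return slopes
```

The robust variants studied in the paper differ in how the initial fit and residual scale are computed (LMS and redescending M-weights instead of OLS), not in this resampling step itself.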