Quantile Regression Estimates for a Class of Linear and Partially Linear Errors-in-Variables Models
Errors in the Dependent Variable of Quantile Regression Models
2019
The main step is Step 5, the piecewise-linear sieve-ML estimator described in Section 3.1. Because this piecewise-linear estimator is computationally intensive, we use a series of preliminary steps to find start values in the neighborhood of the optimum. These steps significantly reduce the time required for convergence of the piecewise-linear estimator. (1) We estimate quantile regression on a grid of knots [t_1, t_2, ..., t_J], where J is the number of knots, and denote the estimate as β_QR(·). (2) We run 40 weighted least squares (WLS) iterations using β_QR(·) from Step 1 as the start value. Using WLS in some fashion is a common technique in quantile regression computational programs; in our case it is motivated by the fact that, under a normality assumption on the EIV term ε, the maximum likelihood estimator is equivalent to a weighted least squares estimator. Supplemental Material Appendix Section A.1 demonstrates this equivalence and also specifies the weights for the WLS iterations. We denote the weighted least squares estimate as β_WLS(·). (3) We estimate a piecewise-constant maximum likelihood estimator using (β_WLS(·), σ_D) as the start value, where σ_D is a default start value for the EIV parameters. In our simulations, where we estimate EIVs as mixtures of three normals, our start values for the EIV parameters specify three equally weighted mixtures with means −1,
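A minimal sketch of Step 1 of this warm-start pipeline, using statsmodels' QuantReg to compute β_QR(t_j) on a grid of knots. The data-generating process and knot grid below are illustrative assumptions; the WLS weights of Step 2 (specified in the paper's Supplemental Appendix A.1) are not reproduced here.

```python
# A sketch of Step 1 (quantile regression start values on a knot grid),
# assuming toy data; beta_qr[j] plays the role of beta_QR(t_j) handed to
# the WLS iterations of Step 2.
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(size=n)
X = np.column_stack([np.ones(n), x])          # intercept + regressor
y = 1.0 + 2.0 * x + rng.normal(size=n)        # hypothetical DGP

knots = np.linspace(0.1, 0.9, 9)              # grid [t_1, ..., t_J], J = 9
beta_qr = np.array([QuantReg(y, X).fit(q=t).params for t in knots])
print(beta_qr.shape)                          # (J, 2): one row per knot
```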
arXiv: Statistics Theory, 2017
In a classical regression model, it is usually assumed that the explanatory variables are independent of each other and that the error terms are normally distributed. When these assumptions are not met, for instance when the error terms are not independent, not identically distributed, or both, least squares estimation (LSE) is not robust. Quantile regression has therefore been used to remedy this deficiency of classical regression analysis and to improve on least squares estimation. In this study, we consider preliminary test and shrinkage estimation strategies for quantile regression models with independently and non-identically distributed (i.ni.d.) errors. A Monte Carlo simulation study is conducted to assess the relative performance of the estimators. We also numerically compare their performance with Ridge, Lasso, and Elastic Net penalty estimation strategies. A real data example is presented to illustrate the usefulness of the suggested methods. Finally, we obtain the asymptotic re...
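To make the robustness claim concrete, here is a small Monte Carlo sketch, under an assumed heavy-tailed design rather than the paper's simulation setup, comparing the sampling variability of the least squares and median regression slopes:

```python
# Monte Carlo sketch (assumed design): with heavy-tailed errors, the
# least squares slope is far more variable than the median regression slope.
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(1)
n, reps = 100, 200
ols_slopes, qr_slopes = [], []
for _ in range(reps):
    x = rng.uniform(size=n)
    y = 1 + 2 * x + rng.standard_t(df=1.5, size=n)  # heavy-tailed errors
    X = np.column_stack([np.ones(n), x])
    ols_slopes.append(np.linalg.lstsq(X, y, rcond=None)[0][1])
    qr_slopes.append(QuantReg(y, X).fit(q=0.5).params[1])
print(np.var(ols_slopes), np.var(qr_slopes))        # LSE is much noisier
```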
Finite Sample Inference for Quantile Regression Models
Under minimal assumptions, finite sample confidence bands for quantile regression models can be constructed. These confidence bands are based on the "conditional pivotal property" of the estimating equations that quantile regression methods aim to solve, and they provide valid finite sample inference for both linear and nonlinear quantile models regardless of whether the covariates are endogenous or exogenous. The confidence regions can be computed using MCMC, and confidence bounds for single parameters of interest can be computed through a simple combination of optimization and search algorithms. We illustrate the finite sample procedure through a brief simulation study and two empirical examples: estimating a heterogeneous demand elasticity and estimating heterogeneous returns to schooling. In all cases, we find pronounced differences between confidence regions formed using the usual asymptotics and confidence regions formed using the finite sample procedure in cases where the usual asymptotics are suspect, such as inference about tail quantiles or inference when identification is partial or weak. The evidence strongly suggests that the finite sample methods may usefully complement existing inference methods for quantile regression when the standard assumptions fail or are suspect.
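A hedged sketch of the conditional pivotal idea in the simplest one-parameter case: at the true coefficient, the indicators 1{y_i ≤ βx_i} are i.i.d. Bernoulli(τ) given the covariates, so the distribution of the quantile estimating equation can be simulated exactly and the test inverted over a grid. The model, grid, and simulation sizes are illustrative assumptions, not the paper's implementation (which handles multivariate parameters via MCMC and optimization).

```python
# Finite-sample confidence set for a scalar quantile coefficient by test
# inversion, assuming the model y = beta * x + u with P(u <= 0 | x) = tau.
# At the true beta the indicators 1{y <= beta * x} are Bernoulli(tau) given
# x, so the statistic's conditional distribution can be simulated exactly.
import numpy as np

rng = np.random.default_rng(2)
n, tau = 200, 0.5
x = rng.uniform(1.0, 2.0, size=n)
y = 1.5 * x + rng.normal(size=n)              # true beta = 1.5 at the median

def stat(b):
    # normalized quantile regression estimating equation at candidate b
    return abs(np.sum(x * (tau - (y <= b * x))) / np.sqrt(n))

# simulate the exact conditional distribution and take the 95% critical value
sims = [abs(np.sum(x * (tau - rng.binomial(1, tau, size=n))) / np.sqrt(n))
        for _ in range(2000)]
crit = np.quantile(sims, 0.95)

grid = np.linspace(0.5, 2.5, 401)             # candidate values to test
conf_set = grid[np.array([stat(b) <= crit for b in grid])]
print(conf_set.min(), conf_set.max())         # finite-sample confidence bounds
```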
Statistical papers, 2016
Quantile regression is an important tool for describing the characteristics of conditional distributions. Population conditional quantile functions cannot cross for different quantile orders. Unfortunately, estimated regression quantile curves often violate this property and cross each other, which is problematic for interpretation and further analysis. In this paper we are concerned with flexible varying-coefficient modelling, and we develop methods for quantile regression that ensure that the estimated quantile curves do not cross. A second aim of the paper is to allow for some heteroscedasticity in the error modelling, and to also estimate the associated variability function. We investigate the finite-sample performance of the discussed methods via simulation studies. Some applications to real data illustrate the use of the methods in practical settings.
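The sketch below illustrates the crossing problem on toy data and applies monotone rearrangement (sorting fitted quantiles at each design point), a simple post-processing remedy that differs from the constrained varying-coefficient estimators developed in the paper:

```python
# Toy illustration: fitted linear quantile curves often cross in finite
# samples; sorting the fitted values across quantile levels at each point
# (monotone rearrangement) restores non-crossing.
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(3)
n = 50
x = rng.uniform(size=n)
y = np.sin(3 * x) + (0.2 + 0.3 * x) * rng.normal(size=n)  # heteroscedastic
X = np.column_stack([np.ones(n), x])

taus = np.linspace(0.1, 0.9, 9)
fitted = np.column_stack([QuantReg(y, X).fit(q=t).fittedvalues for t in taus])
crossings = np.sum(np.diff(fitted, axis=1) < 0)   # monotonicity violations
fitted_sorted = np.sort(fitted, axis=1)           # rearranged, non-crossing
print(crossings, np.sum(np.diff(fitted_sorted, axis=1) < 0))  # second is 0
```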
M-quantile regression: diagnostics and parametric representation of the model
2016
M-quantile regression generalizes both quantile and expectile regression using M-estimation ideas. This paper covers several topics related to estimation, model assessment, and hypothesis testing that have so far been neglected in the many articles on M-quantile regression methods that have appeared in recent years.
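For concreteness, here is a hedged sketch of M-quantile fitting via iteratively reweighted least squares with a Huber influence function and asymmetric weights, the classical Breckling-Chambers construction; the tuning constant and convergence settings are illustrative assumptions, not the paper's choices.

```python
# M-quantile fit by iteratively reweighted least squares: a Huber influence
# function tilted by asymmetric weights q / (1 - q). The tuning constant c
# and MAD scale are conventional choices, assumed here for illustration.
import numpy as np

def m_quantile(X, y, q=0.5, c=1.345, tol=1e-8, max_iter=100):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]           # OLS start value
    for _ in range(max_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust MAD scale
        u = r / s
        psi = np.clip(u, -c, c) * np.where(u > 0, q, 1 - q)  # tilted Huber
        w = np.ones_like(u)
        nz = u != 0
        w[nz] = psi[nz] / u[nz]                           # IRLS weights
        beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        if np.max(np.abs(beta_new - beta)) < tol:
            break
        beta = beta_new
    return beta
```

At q = 0.5 this reduces to ordinary Huber M-regression; large c moves the fit toward expectile regression, while small c moves it toward quantile regression.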
Quantile regression and heteroskedasticity
2011
This note introduces a wrapper for qreg which reports standard errors and t statistics that are asymptotically valid under heteroskedasticity and misspecification of the quantile regression function. Moreover, the results of a heteroskedasticity test are also presented to guide the researcher in the choice of the appropriate covariance matrix estimator. Key words: Bootstrap, Covariance matrix, Robust standard errors.
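An analogous choice exists outside Stata; for instance, statsmodels' QuantReg can report either i.i.d.-error or heteroskedasticity-robust (kernel-based) standard errors. The sketch below, on assumed heteroskedastic toy data, compares the two:

```python
# Sketch of the same choice in statsmodels (not the Stata wrapper itself):
# QuantReg reports i.i.d.-error or heteroskedasticity-robust kernel-based
# standard errors depending on the vcov option; data are illustrative.
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(4)
n = 500
x = rng.uniform(size=n)
y = 1 + 2 * x + (0.5 + x) * rng.normal(size=n)    # heteroskedastic errors
X = np.column_stack([np.ones(n), x])

fit_iid = QuantReg(y, X).fit(q=0.5, vcov='iid')
fit_rob = QuantReg(y, X).fit(q=0.5, vcov='robust')
print(fit_iid.bse)                                # i.i.d.-error SEs
print(fit_rob.bse)                                # robust SEs differ
```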
Pretest and Stein-Type Estimations in Quantile Regression Model
arXiv: Statistics Theory, 2017
In this study, we consider preliminary test and shrinkage estimation strategies for quantile regression models. In the classical least squares estimation (LSE) method, the relationship between the explanatory and explained variables is estimated with a mean regression line. For LSE to be appropriate, three main assumptions on the error terms, the white-noise conditions of the regression model also known as the Gauss-Markov assumptions, must be met: (1) the error terms have zero mean, (2) the variance of the error terms is constant, and (3) the covariance between the errors is zero, i.e., there is no autocorrelation. However, data in many areas, including econometrics, survival analysis, and ecology, often do not satisfy these assumptions. First introduced by Koenker, quantile regression has been used to remedy this deficiency of classical regression analysis and to improve on least squares estimation. The aim of this study is to improve the performance of quan...
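As a concrete illustration, the sketch below implements a simple pretest estimator for one coefficient in a quantile regression: keep the restricted fit unless a Wald test rejects the restriction. The cutoff, design, and test form are illustrative assumptions; the paper's Stein-type shrinkage rules are more elaborate.

```python
# Pretest estimator sketch: use the restricted estimator (beta_2 = 0)
# unless a Wald test rejects the restriction at the 5% level. The design
# and cutoff are assumptions for illustration only.
import numpy as np
from scipy.stats import chi2
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(5)
n, tau = 400, 0.5
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = 1 + 1.5 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(size=n)

full = QuantReg(y, X).fit(q=tau)
restricted = QuantReg(y, X[:, :2]).fit(q=tau)
wald = (full.params[2] / full.bse[2]) ** 2        # test H0: beta_2 = 0
if wald > chi2.ppf(0.95, df=1):
    beta_pretest = full.params                    # restriction rejected
else:
    beta_pretest = np.append(restricted.params, 0.0)
print(wald, beta_pretest)
```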
Goodness of Fit and Related Inference Processes for Quantile Regression
Journal of the American Statistical Association, 1999
We introduce a goodness-of-fit process for quantile regression analogous to the conventional R^2 statistic of least squares regression. Several related inference processes designed to test composite hypotheses about the combined effect of several covariates over an entire range of conditional quantile functions are also formulated. The asymptotic behavior of the inference processes is shown to be closely related to earlier p-sample goodness-of-fit theory involving Bessel processes. The approach is illustrated with some hypothetical examples, an application to recent empirical models of international economic growth, and some Monte Carlo evidence.
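A minimal sketch of the goodness-of-fit measure in this spirit, one minus the ratio of the check-function objective of the fitted model to that of an intercept-only model (the quantile analogue of R^2); the data are illustrative:

```python
# R1(tau): one minus the ratio of check-function objectives of the full
# and intercept-only quantile fits; toy data, assumed design.
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

def check_loss(u, tau):
    return np.sum(u * (tau - (u < 0)))            # rho_tau(u)

rng = np.random.default_rng(6)
n, tau = 300, 0.5
x = rng.uniform(size=n)
y = 1 + 2 * x + rng.standard_t(df=3, size=n)
X = np.column_stack([np.ones(n), x])

fit_full = QuantReg(y, X).fit(q=tau)
fit_null = QuantReg(y, np.ones((n, 1))).fit(q=tau)  # intercept-only model
v_full = check_loss(y - fit_full.fittedvalues, tau)
v_null = check_loss(y - fit_null.fittedvalues, tau)
print(1 - v_full / v_null)                        # goodness of fit in [0, 1]
```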
Bias correction for quantile regression estimators
arXiv (Cornell University), 2020
We study the bias of classical quantile regression and instrumental variable quantile regression estimators. While being asymptotically first-order unbiased, these estimators can have non-negligible second-order biases. We derive a higher-order stochastic expansion of these estimators using empirical process theory. Based on this expansion, we derive an explicit formula for the second-order bias and propose a feasible bias correction procedure that uses finite-difference estimators of the bias components. The proposed bias correction method performs well in simulations. We provide an empirical illustration using Engel's classical data on household expenditure.
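As a rough stand-in for the paper's analytical finite-difference correction, the sketch below applies a generic bootstrap bias correction (twice the estimate minus the mean of bootstrap replicates) to a quantile regression coefficient; it is shown only to make the idea of removing second-order bias concrete, on assumed toy data.

```python
# Generic bootstrap bias correction for a quantile regression coefficient:
# beta_bc = 2 * beta_hat - mean(bootstrap replicates). A stand-in for the
# paper's finite-difference correction; toy data, assumed design.
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(7)
n, tau, B = 200, 0.5, 300
x = rng.exponential(size=n)
y = 1 + 0.5 * x + rng.standard_t(df=3, size=n)
X = np.column_stack([np.ones(n), x])

beta_hat = QuantReg(y, X).fit(q=tau).params
boot = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, n, size=n)              # resample (y, x) pairs
    boot[b] = QuantReg(y[idx], X[idx]).fit(q=tau).params
beta_bc = 2 * beta_hat - boot.mean(axis=0)        # bias-corrected estimate
print(beta_hat, beta_bc)
```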