Bootstrapping Efficiency Probabilities in Parametric Stochastic Frontier Models
Related papers
A Monte Carlo Study of Efficiency Estimates from Frontier Models
SSRN Electronic Journal, 2000
Parametric stochastic frontier models yield firm-level conditional distributions of inefficiency that are truncated normal. Given these distributions, how should one assess and rank firm-level efficiency? This study compares the techniques of estimating a) the conditional mean of inefficiency and b) probabilities that firms are most or least efficient.
A Monte Carlo study of ranked efficiency estimates from frontier models
Journal of Productivity Analysis, 2012
Parametric stochastic frontier models yield firm-level conditional distributions of inefficiency that are truncated normal. Given these distributions, how should one assess and rank firm-level efficiency? This study compares the techniques of estimating a) the conditional mean of inefficiency and b) probabilities that firms are most or least efficient. Monte Carlo experiments suggest that the efficiency probabilities are more reliable in terms of mean absolute percent error when inefficiency has large variation across firms. Along the way we tackle some interesting problems associated with simulating and assessing estimator performance in the stochastic frontier environment.
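The two estimands being compared can be illustrated concretely: in the standard normal/half-normal model, the conditional distribution of inefficiency u_i given the composed residual ε_i is normal with mean μ*_i = −ε_i σ_u²/σ² and variance σ*² = σ_u²σ_v²/σ², truncated below at zero. A minimal sketch of both (a) the JLMS conditional mean and (b) simulated probabilities of being most efficient — all parameter values and residuals below are invented for illustration, not taken from the paper:

```python
import numpy as np
from scipy.stats import norm, truncnorm

rng = np.random.default_rng(0)

# Illustrative parameters and composed residuals for 4 firms (assumed values)
sigma_u, sigma_v = 0.3, 0.2
sigma2 = sigma_u**2 + sigma_v**2
eps = np.array([-0.5, -0.1, 0.1, 0.6])

# u_i | eps_i ~ N(mu_star_i, sigma_star^2) truncated below at 0
mu_star = -eps * sigma_u**2 / sigma2
sigma_star = sigma_u * sigma_v / np.sqrt(sigma2)
z = mu_star / sigma_star

# (a) JLMS conditional mean E[u_i | eps_i]
jlms = mu_star + sigma_star * norm.pdf(z) / norm.cdf(z)

# (b) probability each firm is MOST efficient (smallest u), by simulation
draws = truncnorm.rvs(-z[:, None], np.inf, loc=mu_star[:, None],
                      scale=sigma_star, size=(4, 10_000), random_state=rng)
p_best = np.bincount(draws.argmin(axis=0), minlength=4) / 10_000
```

Note that the ranking by `jlms` and the probabilities `p_best` need not convey the same information: the probabilities also reflect how much the firm-level conditional distributions overlap.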
Journal of the American Statistical Association, 2016
The issues of functional form, distributions of the error components and endogeneity are for the most part still open in stochastic frontier models. The same is true when it comes to imposition of restrictions of monotonicity and curvature, making efficiency estimation an elusive goal. In this paper we attempt to consider these problems simultaneously and offer practical solutions to the problems raised by Stone (2002) and addressed in Badunenko, Henderson and Kumbhakar (2012). We provide major extensions to smoothly mixing regressions and fractional polynomial approximations for both the functional form of the frontier and the structure of inefficiency. Endogeneity is handled, simultaneously, using copulas. We provide detailed computational experiments and an application to US banks. To explore the posteriors of the new models we rely heavily on Sequential Monte Carlo techniques.
Confidence statements for efficiency estimates from stochastic frontier models
Journal of Productivity Analysis, 1996
This paper is an empirical study of the uncertainty associated with technical efficiency estimates from stochastic frontier models. We show how to construct confidence intervals for estimates of technical efficiency levels under different sets of assumptions ranging from the very strong to the relatively weak. We demonstrate empirically how the degree of uncertainty associated with these estimates relates to the strength of the assumptions made and to various features of the data.
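Under the strongest assumptions, an interval of this kind follows directly from quantiles of the truncated-normal conditional distribution of u_i, since technical efficiency exp(−u_i) is a monotone transformation of u_i. A hedged sketch of that generic construction — the numerical values are invented and this is not the paper's empirical setup:

```python
import numpy as np
from scipy.stats import norm

def trunc_quantile(p, mu, sig):
    """p-th quantile of N(mu, sig^2) truncated below at 0."""
    return mu + sig * norm.ppf(norm.cdf(-mu / sig) + p * norm.cdf(mu / sig))

# Hypothetical conditional-distribution parameters for one firm
mu_star, sigma_star = 0.15, 0.10
alpha = 0.05

u_lo = trunc_quantile(alpha / 2, mu_star, sigma_star)
u_hi = trunc_quantile(1 - alpha / 2, mu_star, sigma_star)

# 95% interval for technical efficiency exp(-u); note the bounds reverse
te_lower, te_upper = np.exp(-u_hi), np.exp(-u_lo)
```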
Empirical Economics, 2008
In this paper, we use the local maximum likelihood (LML) method proposed by Kumbhakar et al. (J Econom, 2007) to estimate stochastic cost frontier models for a sample of 3,691 U.S. commercial banks. This method addresses several deficiencies in the econometric estimation of frontier functions. In particular, we relax the assumption that all banks share the same production technology and provide bank-specific measures of returns to scale and cost inefficiency. The LML method is applied to estimate cost frontiers in which a truncated normal distribution is used to model technical inefficiency. This formulation allows the cost frontier, the inefficiency effects, and the heteroskedasticity in both the noise and inefficiency components to be quite flexible.
Journal of Productivity Analysis, 2007
We study the construction of confidence intervals for efficiency levels of individual firms in stochastic frontier models with panel data. The focus is on bootstrapping and related methods. We start with a survey of various versions of the bootstrap. We also propose a simple parametric alternative in which one acts as if the identity of the best firm is known. Monte Carlo simulations indicate that the parametric method works better than the percentile bootstrap, but not as well as bootstrap methods that make bias corrections. All of these methods are valid only for large time-series sample size (T), and correspondingly none of the methods yields very accurate confidence intervals except when T is large enough that the identity of the best firm is clear. We also present empirical results for two well-known data sets.
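The percentile bootstrap mentioned above can be sketched in a few lines for a distribution-free fixed-effects panel estimator, in which firm i's inefficiency is estimated as max_j α̂_j − α̂_i and time-period residuals are resampled within firms. Everything below is an invented toy example under those assumptions, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy panel: 5 firms observed over T periods (all numbers illustrative)
T, firms = 50, 5
alpha = np.array([0.0, -0.1, -0.25, -0.4, -0.6])     # "true" fixed effects
resid = rng.normal(0, 0.2, size=(firms, T))
alpha_hat = alpha + resid.mean(axis=1)
u_hat = alpha_hat.max() - alpha_hat                   # estimated inefficiencies

# Percentile bootstrap: resample time periods, re-estimate each u_i
B = 2000
centered = resid - resid.mean(axis=1, keepdims=True)
boot = np.empty((B, firms))
for b in range(B):
    idx = rng.integers(0, T, T)                       # resampled periods
    a_b = alpha_hat + centered[:, idx].mean(axis=1)
    boot[b] = a_b.max() - a_b
ci = np.percentile(boot, [2.5, 97.5], axis=0)         # per-firm 95% intervals
```

The weakness the paper documents is visible here: because the benchmark max_j α̂_j is itself estimated, the bootstrap distribution is biased unless T is large enough that the identity of the best firm is essentially certain.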
Statistical Inference In Nonparametric Frontier Models: The State of the Art
Journal of Productivity Analysis, 2000
Efficiency scores of firms are measured by their distance to an estimated production frontier. The economic literature proposes several nonparametric frontier estimators based on the idea of enveloping the data (FDH and DEA-type estimators). Many have claimed that FDH and DEA techniques are non-statistical, as opposed to econometric approaches where particular parametric expressions are posited to model the frontier. We can now define a statistical model allowing determination of the statistical properties of the nonparametric estimators in the multi-output and multi-input case. New results provide the asymptotic sampling distribution of the FDH estimator in a multivariate setting and of the DEA estimator in the bivariate case. Sampling distributions may also be approximated by bootstrap distributions in very general situations. Consequently, statistical inference based on DEA/FDH-type estimators is now possible. These techniques allow correction for the bias of the efficiency estimators and estimation of confidence intervals for the efficiency measures. This paper summarizes the results which are now available, and provides a brief guide to the existing literature. Emphasizing the role of hypotheses and inference, we show how the results can be used or adapted for practical purposes.
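For intuition about the envelopment idea, the FDH score in the simplest one-input, one-output, input-oriented case is just the largest feasible input contraction relative to observed units producing at least as much output. A toy sketch with invented data (the paper's results cover the general multi-input, multi-output case):

```python
import numpy as np

# Toy single-input, single-output data (illustrative only)
x = np.array([2.0, 4.0, 3.0, 5.0])   # inputs
y = np.array([1.0, 3.0, 2.0, 2.5])   # outputs

def fdh_input_eff(x, y):
    """Input-oriented FDH score: minimal input ratio over observed
    units whose output is at least as large as firm i's."""
    theta = np.empty(len(x))
    for i in range(len(x)):
        dominating = y >= y[i]
        theta[i] = (x[dominating] / x[i]).min()
    return theta

scores = fdh_input_eff(x, y)   # score 1.0 means on the FDH frontier
```

Scores of this type are exactly the estimators whose sampling distributions, bias corrections, and bootstrap confidence intervals the surveyed results make available.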
Journal of Econometrics, 1988
A stochastic frontier production function is defined for panel data on sample firms, such that the disturbances associated with observations for a given firm involve the differences between traditional symmetric random errors and a non-negative random variable, which is associated with the technical efficiency of the firm. Given that the non-negative firm effects are time-invariant and have a general truncated normal distribution, we obtain the best predictor for the firm-effect random variable and the appropriate technical efficiency of an individual firm, given the values of the disturbances in the model. The results obtained are a generalization of those presented by Jondrow et al. (1982) for a cross-sectional model in which the firm effects have a half-normal distribution. The model is applied in the analysis of three years of data for dairy farms in Australia.
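For orientation, in the cross-sectional half-normal special case the predictor of technical efficiency discussed here has the standard closed form (with $\mu_{*i}$ and $\sigma_*$ the mean and standard deviation of the conditional distribution of $u_i$ given $\varepsilon_i$; this is the textbook expression, not a new result):

```latex
\operatorname{TE}_i
  = E\!\left[\exp(-u_i)\mid \varepsilon_i\right]
  = \frac{\Phi\!\left(\mu_{*i}/\sigma_* - \sigma_*\right)}
         {\Phi\!\left(\mu_{*i}/\sigma_*\right)}
    \exp\!\left(-\mu_{*i} + \tfrac{1}{2}\sigma_*^2\right),
\qquad
\mu_{*i} = -\frac{\varepsilon_i\,\sigma_u^2}{\sigma_u^2+\sigma_v^2},
\quad
\sigma_*^2 = \frac{\sigma_u^2\sigma_v^2}{\sigma_u^2+\sigma_v^2}.
```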
Efficiency measurement using a latent class stochastic frontier model
Empirical Economics, 2004
Efficiency estimation in stochastic frontier models typically assumes that the underlying production technology is the same for all firms. There may, however, be unobserved differences in technology that are inappropriately labeled as inefficiency if they are not taken into account. We address this issue by estimating a latent class stochastic frontier model in a panel data framework. An application of the model is presented using Spanish banking data. Our results show that bank heterogeneity can be fully controlled for when a model with four classes is estimated.
Estimation of Inefficiency using a Firm-specific Frontier Model
2013
It has been argued that the deterministic frontier approach to inefficiency measurement has a major limitation: inefficiency is mixed with measurement error (statistical noise), so the resulting inefficiency estimates are contaminated by noise. The later stochastic frontier approach improves on this by allowing a statistical noise term in the model that captures all factors other than inefficiency, and it has been widely used for inefficiency analysis despite its complicated form and estimation procedure. This paper introduces an extra parameter that estimates the proportion of the observational error contributed by one error component. An EM approach is used to estimate the model, and a test procedure is developed to test for the significance of the presence of that error component in the observational error.