Evaluating Statistical Hypotheses Using Weakly-Identifiable Estimating Functions
Related papers
Inference for Semiparametric Models: Some Questions and an Answer
2001
Non- and semi-parametric models have flourished during the last twenty-five years. They have been applied widely and their theoretical properties have been studied extensively. We briefly review some of their development and list a few questions that we would like to see addressed. We develop an answer to one of these questions by formulating a 'calculus', similar to that of the i.i.d. case, that enables us to analyze the efficiency of procedures in general semiparametric models once a nonparametric model has been defined. Our approach is illustrated by applying it to regression models, counting-process models in survival analysis and submodels of Markov chains, which traditionally require elaborate special arguments. In the examples, the calculus lets us easily read off the structure of efficient estimators and check whether a candidate estimator is efficient.
Partial Identification and Confidence Intervals
2009
We consider statistical inference on a single component of a parameter vector that satisfies a finite number of moment inequalities. The null hypothesis for this single component is given a dual characterization as a composite hypothesis regarding point-identified parameters. We are also careful in specifying the alternative hypothesis, which likewise has a dual characterization as a composite hypothesis regarding point-identified parameters. This setup substantially simplifies the conceptual basis of the inference problem. For an interval-identified parameter we obtain closed-form expressions for the confidence interval obtained by inverting the test statistic of the composite null against the composite alternative. JEL codes: C01, C12
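A schematic picture of the inversion described above (the statistic T_n, the critical value c, and the endpoint estimators below are my notation, not the paper's): the confidence interval collects the values of the single component that the composite-null test fails to reject, and for an interval-identified parameter it widens each estimated endpoint of the identified interval by a critical-value margin.

```latex
% Sketch only; symbols are illustrative, not taken from the paper.
\[
CI_n(1-\alpha)
  = \bigl\{\theta_0 : T_n(\theta_0) \le c_{1-\alpha}(\theta_0)\bigr\}
  \;\approx\;
  \Bigl[\,\hat\theta_L - c\,\tfrac{\hat\sigma_L}{\sqrt{n}},\;
         \hat\theta_U + c\,\tfrac{\hat\sigma_U}{\sqrt{n}}\,\Bigr],
\]
```

where [θ̂_L, θ̂_U] estimates the identified interval and σ̂_L, σ̂_U are standard errors of its endpoints.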
A note on nonparametric estimations
Asymptotic Methods in Probability and Statistics, 1998
We give an informal explanation, with the help of a Taylor expansion, of the most important properties of the maximum likelihood estimate in the parametric case. Then an analogous estimate is investigated in two nonparametric models: the estimation of the empirical distribution function from censored data and the Cox model. It is shown that an argument very similar to the proof in the parametric case yields analogous properties of the estimates in these cases too. There is an important non-trivial step in the proofs which is discussed in more detail. A double stochastic integral with respect to a standardized empirical process has to be estimated. This corresponds to the estimation of the second term of the Taylor expansion in the parametric case. We think that the method explained in this paper is applicable in several other models.
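The Taylor-expansion argument alluded to here is the classical one; a minimal version, in my notation with score U_n and Fisher information I(θ), reads as follows.

```latex
% Standard one-step expansion of the likelihood equation U_n(\hat\theta) = 0.
\[
0 = U_n(\hat\theta) \approx U_n(\theta_0) + U_n'(\theta_0)\,(\hat\theta - \theta_0)
\;\Longrightarrow\;
\sqrt{n}\,(\hat\theta - \theta_0)
  \approx \Bigl(-\tfrac{1}{n}U_n'(\theta_0)\Bigr)^{-1} \tfrac{1}{\sqrt{n}}\,U_n(\theta_0)
  \;\xrightarrow{d}\; N\!\bigl(0,\, I(\theta_0)^{-1}\bigr).
\]
```

In the nonparametric settings the control of the second-order term is replaced by the estimate of the double stochastic integral mentioned in the abstract.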
Asymptotic Properties for a Class of Partially Identified Models
Econometrica, 2008
We propose inference procedures for partially identified population features for which the population identification region can be written as a transformation of the Aumann expectation of a properly defined set-valued random variable (SVRV). An SVRV is a mapping that associates a set (rather than a real number) with each element of the sample space. Examples of population features in this class include sample means and best linear predictors with interval outcome data, and parameters of semiparametric binary models with interval regressor data. We extend the analogy principle to SVRVs, and show that the sample analog estimator of the population identification region is given by a transformation of a Minkowski average of SVRVs. Using the results of the mathematics literature on SVRVs, we show that this estimator converges in probability to the identification region of the model with respect to the Hausdorff distance. We then show that the Hausdorff distance between the estimator and the population identification region, when properly normalized by √n, converges in distribution to the supremum of a Gaussian process whose covariance kernel depends on parameters of the population identification region. We provide consistent bootstrap procedures to approximate this limiting distribution. Using similar arguments as those applied for vector-valued random variables, we develop a methodology to test assumptions about the true identification region and to calculate the power of the test. We show that these results can be used to construct a confidence collection, that is, a collection of sets that, when specified as the null hypothesis for the true value of the population identification region, cannot be rejected by our test.
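For the simplest population feature mentioned above, the mean with interval outcome data, the sample-analog estimator and the Hausdorff distance are easy to compute. The sketch below only illustrates those two ingredients; the variable names and the simulated censoring scheme are mine, not the paper's.

```python
# Illustrative sketch: sample-analog estimate of the identification region for a mean
# observed only through intervals [y_lo, y_hi], and the Hausdorff distance between intervals.
import numpy as np

def sample_identification_region(y_lo, y_hi):
    # Minkowski average of the observed intervals = [mean of lower ends, mean of upper ends]
    return float(np.mean(y_lo)), float(np.mean(y_hi))

def hausdorff_interval(a, b):
    # Hausdorff distance between two closed intervals a = [a0, a1], b = [b0, b1]
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

rng = np.random.default_rng(0)
y = rng.normal(size=500)                      # latent outcome (unobserved in practice)
y_lo, y_hi = np.floor(y), np.floor(y) + 1.0   # interval-valued observation of y

region = sample_identification_region(y_lo, y_hi)
print("estimated identification region:", region)
print("Hausdorff distance to candidate set [-0.6, 0.6]:",
      hausdorff_interval(region, (-0.6, 0.6)))
```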
On identifiability of parametric statistical models
Journal of the Italian Statistical Society, 1994
This is a review article on statistical identifiability. Besides the definition of the main concepts, we deal with several questions relevant to the statistician: parallelism between parametric identifiability and sample sufficiency; relationship of identifiability with measures of sample information and with the inferential concept of estimability; several strategies of making inferences in unidentifiable models with emphasis on the distinct behaviour of the classical and Bayesian approaches. The concepts, ideas and methods discussed are illustrated with simple examples of statistical models.
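For reference, the core definition the review starts from, together with a textbook example of failure (both standard, not specific to this article):

```latex
% A model \{P_\theta : \theta \in \Theta\} is identifiable when \theta \mapsto P_\theta is injective:
\[
\theta_1 \neq \theta_2 \;\Longrightarrow\; P_{\theta_1} \neq P_{\theta_2}.
\]
% Classic failure: X \sim N(\mu_1 + \mu_2, 1) with parameter (\mu_1, \mu_2);
% only the sum \mu_1 + \mu_2 is identified, not the individual components.
```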
Efficient semiparametric estimators via modified profile likelihood
Journal of Statistical Planning and Inference, 2005
A new strategy is developed for obtaining large-sample efficient estimators of finite-dimensional parameters θ within semiparametric statistical models. The key idea is to maximize over θ a nonparametric log-likelihood with the infinite-dimensional nuisance parameter λ replaced by a consistent preliminary estimator λ̃_θ of the Kullback-Leibler minimizing value λ_θ for fixed θ. It is shown that the parametric submodel with the Kullback-Leibler minimizer λ_θ substituted for λ is generally a least-favorable model. Results extending those of Severini and Wong (Ann. Statist. 20 (1992) 1768) then establish efficiency of the estimator of θ that maximizes the log-likelihood with λ replaced, for fixed θ, by λ̃_θ. These theoretical results are specialized to censored linear regression and to a class of semiparametric survival analysis regression models including proportional hazards models with unobserved random effect or 'frailty', the latter through results of Slud and Vonta (Scand. J. Statist. 31 (2004)) characterizing the restricted Kullback-Leibler information minimizers.
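In schematic form, the strategy just described reads as follows (the displays are my paraphrase, not quoted from the paper):

```latex
% Sketch of the modified-profile-likelihood estimator described above.
\[
\lambda_\theta = \arg\min_{\lambda}\, KL\bigl(P_{\theta_0,\lambda_0} \,\Vert\, P_{\theta,\lambda}\bigr),
\qquad
\hat\theta = \arg\max_{\theta} \sum_{i=1}^{n} \log f\bigl(X_i;\,\theta,\tilde\lambda_\theta\bigr),
\]
% where \tilde\lambda_\theta is a consistent preliminary estimator of \lambda_\theta for each fixed \theta.
```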
Considerations on Partially Identified Regression Models
SSRN Electronic Journal, 2000
Building on earlier work, and especially its partial identification analysis of the regression model where one covariate is only interval-measured, we offer several contributions. That work proposes two estimation approaches in this context, focusing on general results: the modified minimum distance (MMD) estimator estimates the true identified set and the modified method of moments (MMM) estimator a superset. Our first contribution is to characterize the true identified set and the superset. Second, we complete and extend the earlier Monte Carlo study. We present benchmark results using the exact functional form for the expectation of the dependent variable conditional on observables, compare them with results using its nonparametric estimates, and illustrate the superiority of MMD over MMM. For MMD, we propose a simple shortcut for estimation.
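One generic way to turn such a characterization into a set estimator, given here only as an illustration (it is not necessarily the MMD construction used in the paper), is to take a level set of the sample criterion function:

```latex
% Level-set estimator of an identified set defined by a population criterion Q(\theta) = 0.
\[
\hat\Theta_n = \bigl\{\theta \in \Theta : Q_n(\theta) \le \min_{\vartheta \in \Theta} Q_n(\vartheta) + \epsilon_n \bigr\},
\qquad \epsilon_n \downarrow 0,
\]
% where Q_n is the sample analogue of Q and \epsilon_n is a slowly vanishing slackness sequence.
```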
Parameterization and inference for nonparametric regression problems
Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2001
We consider local likelihood or local estimating equations, in which a multivariate function Θ(·) is estimated but a derived function λ(·) of Θ(·) is of interest. In many applications, when most naturally formulated, the derived function is a non-linear function of Θ(·). In trying to understand whether the derived non-linear function is constant or linear, a problem arises with this approach: when the function is actually constant or linear, the expectation of the function estimate need not be constant or linear, at least to second order. In such circumstances, the simplest standard methods in nonparametric regression for testing whether a function is constant or linear cannot be applied. We develop a simple general solution which is applicable to nonparametric regression, varying-coefficient models, nonparametric generalized linear models, etc. We show that, in local linear kernel regression, inference about the derived function λ(·) is facilitated without a loss of power by reparameterization so that λ(·) is itself a component of Θ(·). Our approach is in contrast with the standard practice of choosing Θ(·) for convenience and allowing λ(·) to be a non-linear function of Θ(·). The methods are applied to an important data set in nutritional epidemiology.
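A toy version of the reparameterization idea, using the Θ(·), λ(·) notation above (the ratio form of λ is only an illustration, not the paper's model): if Θ(·) = (θ₁(·), θ₂(·)) is fitted locally and interest lies in λ(·) = θ₁(·)/θ₂(·), one can refit the model with λ(·) as a component.

```latex
% Reparameterize so the derived function is itself a component of the local parameter.
\[
\Theta(\cdot) = \bigl(\theta_1(\cdot), \theta_2(\cdot)\bigr),
\quad \lambda(\cdot) = \frac{\theta_1(\cdot)}{\theta_2(\cdot)}
\;\;\longmapsto\;\;
\bigl(\lambda(\cdot), \theta_2(\cdot)\bigr), \quad \theta_1(\cdot) = \lambda(\cdot)\,\theta_2(\cdot),
\]
% after which standard nonparametric tests of "\lambda constant" or "\lambda linear"
% apply directly to a fitted component.
```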